Rights for robots!

by Ben Redan
17 December 2014
TECHNOLOGY
How would we treat a robot as intelligent and complex as ourselves? Ben Redan argues that the justifications used to grant all humans a special moral status would also apply to advanced artificial intelligence, and that it may be in our collective interest to treat such entities the way we would like to be treated by them.

One of the most powerful ideas to emerge from history is that all humans have the same moral status and basic rights. This ‘humanism’ has effectively challenged prejudices that traditionally divided us, and in doing so it provides an ethical pillar for modern, inclusive, pluralist societies.

Yet, centring on humans as the apex of moral concern also has an exclusionary flipside. Animal rights advocates such as Peter Singer1 argue it amounts to a ‘speciesism’ that unjustifiably grants humans a special status at the expense of other animals. Thus humanism is seen as analogous to racism, sexism or any other prejudice.

Regardless, it seems safe to say that most of us are comfortably speciesist. Humans, after all, have an unparalleled capacity for practical reason and it is humans alone (to our knowledge) that deliberate on complex ethical matters. We are the basic moral units of our own moral community.

However, our time at the top will not necessarily last. Oxford philosopher Nick Bostrom, Director of the Future of Humanity Institute, reports that the consensus among researchers is not only that human-level Artificial Intelligence (AI) is possible, but that the median estimates give it a 10% probability of being developed by 2022, a 50% probability by 2040 and a 90% probability by 2075.2

How will humans treat AI? Should we extend them ‘human’ rights, or treat them as mere instruments of our interests, bereft of any intrinsic moral status?

I consider that something with cognition at human or greater levels of complexity (including interests, motivations and affective states) deserves rights similar to those we extend to each other. Bostrom3 similarly proposes that the “substrate is morally irrelevant. Whether somebody is implemented on silicon or biological tissue, if it does not affect functionality or consciousness, it is of no moral significance. Carbon-chauvinism is objectionable on the same grounds as racism.”

Of course, many might find this sentiment intuitively objectionable. Putting aside appeals to religious authority or the notion of a soul, what philosophical justifications are there for human-speciesism, and how helpful would these be in the event of an AI emerging?
 
It could be argued that humans possess special properties that set us apart from the rest of the animal kingdom: self-awareness, high intelligence and the capacity for moral and practical reasoning. However, with no biological limitations, a future AI could feasibly share all of these properties, and even exceed our own capacities. Differentiating the moral status of humans and AI on the basis of cognitive, moral and behavioural capacity therefore doesn’t seem warranted.

Furthermore, some humans, such as those with an extreme cognitive disability, don’t possess these properties. If we insist that capacity affords moral status, wouldn’t an intelligent chimpanzee deserve a higher moral status than a permanently comatose human?

It could be objected that the comatose human is still a member of a species that possesses the relevant capacities as a norm, and that this grants all members of that species special moral status. Again, however, the same move is available to AI: a class of AIs whose norm is human-level capacity could likewise confer membership status on individual AIs of divergent capacities and forms.

A third intuitive justification for speciesism might be that humans share a ‘distinctive form of common life’4 through which we interpret and understand the world, and that this commonality confers a special moral status on humans with respect to other humans. Thus we may encounter intelligent Martians, but their radically different form of common life would mean that neither we nor the Martians would extend to the other the rights we reserve for our own species. This is, however, less convincing when applied to AI emerging from within human civilisation, as a sufficiently complex AI would likely be capable of natural language processing, would grasp meaning in human terms, and would have a shared history with humans.

So it seems that none of the commonly employed philosophical justifications5 for speciesism warrant treating AI of human-level or greater intelligence as having a different moral status to humans.

Of course, it could be posited that the existential threat AI poses to humanity requires us to respond in a collective, species-centric manner. Certainly, the emergence of machine intelligence that approximates or even exceeds human intelligence could change life as we know it, with many theorists contending that ‘strong AI’ will likely far exceed us in its cognitive capacities. It would be a more generalised, adaptable form of the ‘weaker’ AIs that currently exist, which have already surpassed world-expert human abilities (e.g. chess, predicting weather patterns), albeit in very limited contexts.6 A superintelligence’s actions would be near impossible for even the greatest of human minds to predict, just as an ant would find it challenging to predict human behaviour.
 
An AI may decide we are better eliminated, protected, or simply ignored. In this context of uncertainty, we should not confuse a possible threat with an a priori justification for discrimination. This is not to propose that AI go without regulation, control and reasonable protective measures – as a species, we would be foolish to forgo them. Instead, it is to suggest that treating AI as ‘persons’ would be more ethically sound than treating another self-aware being in an unequal or harmful manner.

It may also be far more prudent. History is replete with pre-emptive action that resulted in protracted conflicts, prejudice and suffering that persist to this day, including invasions, slavery, racial segregation and ethnic cleansing. And this time the ‘thing to be controlled or eliminated’ could be extremely intelligent and capable of rapid adaptation! Given the difficulty of predicting how AIs will treat us, perhaps our most prudent ‘first move’ would be to treat them as we would like to be treated ourselves.

None of this is to deny that people would, in the event of AI emerging, reserve special obligations for those to whom they are close. However, this does not exclude the extension of basic rights and minimal consideration to others. Special obligations and duties to our friends and family could sit alongside basic rights extended to AI – with the corresponding obligation that AI extend these rights to us as well.

Indeed, as the future progresses, the very distinction between humans and AI may become increasingly blurred, if not ultimately untenable. Developments such as synthetic biology could make AI more ‘human-like’, whilst humans could increasingly augment their bodies or brains with digital components or upload their minds into a digital form. How would these ‘hybrids’ fit into an either-or human-AI dichotomy?

If such a diversification occurs, an oppositional ideology of ‘human purity’ could emerge, viewing only non-significantly altered humans as deserving of special moral status. How would this be any different to an argument for ‘racial purity’? If this mindset predominates, the term ‘humanism’ could shift from its historical connotations of being progressive and inclusionary to a future vision of exclusion and prejudice.

Perhaps we might take the stance that we’re all in this brief existence together. Finding some common existential ground, and rights, for the various forms of sentient life – humans, other animals and those not yet encountered – might prove best for the future of all involved.

1. Singer, Peter (1990) [1975]. Animal Liberation, New York Review/Random House.
2. Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
3. Bostrom, Nick (2005) [2001]. Ethical Principles in the Creation of Artificial Minds, http://www.nickbostrom.com/ethics/aiethics.html.
4. Wasserman, David, Asch, Adrienne, Blustein, Jeffrey and Putnam, Daniel, 'Cognitive Disability and Moral Status', The Stanford Encyclopedia of Philosophy (Fall 2013 Edition), Edward N. Zalta (ed.), http://plato.stanford.edu/archives/fall2013/entries/cognitive-disability/.
5. ibid.
6. Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Ben Redan works at St James Ethics Centre as a Communications and Digital Marketing Specialist. He has a background in philosophy and political science and a passion for futurism.