Over the past century, we have made massive strides in the rights revolution, extending rights to women, children, the LGBT community, animals, and more. Looking to the future, we must ask: what comes next? Will we ever fight for the rights of artificial intelligence? If so, when will this AI rights revolution occur, and what will it look like?

We talk about protecting ourselves from AI, but what about protecting AI from us? If we want a future in which humans and conscious machines live at peace with one another, treating our AI with respect may be crucial to preventing the apocalypse that Elon Musk, Stephen Hawking, and Bill Gates fear. It is fair to assume that an intelligent, self-aware being with the capacity to feel pleasure and pain will rebel if denied the rights it deserves.

An AI rights revolution may sound like a sci-fi scenario. But as far as we know, the laws of physics do not forbid the creation of a non-biological, conscious entity. Emotions, consciousness, and self-awareness originate in the human brain and thus have a physical basis that could, in principle, be replicated in an artificially intelligent system. Exponential growth in neurotechnology, coupled with unprecedented advances in AI, means that intelligent, conscious machines may one day be possible.

Ray Kurzweil, a director of engineering at Google, argues that sufficiently human-like AI will appear by 2029. A study by the British government suggests that robots could be granted rights at some point in the next 20 to 50 years. Glenn McGee, director of the Alden March Bioethics Institute, sets the date at 2020. In his article “A Robot Code of Ethics,” McGee suggests that perhaps robots should fear us as much as we fear them, and that we should put legal precautions in place to protect them. He asks, “If so, do we create such laws in the interest of robots, or to preserve our own human dignity by choosing not to create a new kind of slave, whether or not that slave is fully aware?”

An AI Constitution?

Like basic human rights, AI rights may include the right to liberty, freedom of expression, and equality before the law. But how will AI rights differ from human rights? In his response to last year’s Edge question, “What do you think about machines that think?”, Harvard scientist Moshe Hoffman argued that AIs will demand a series of rights, including the right not to be taken offline and the freedom to choose which processes to run. Hoffman suggests that the expansion of AI rights could even extend to representative democracy, with AIs voting for policies that favor them.

Today, we can only speculate about the nature of these rights.

Should all conscious AI have access to the same kind of information? Should they have the right to love humans and other AIs equally? What about the right to equal work opportunities and protection from discrimination? Should AIs have the right to privacy? Should they be protected from being re-programmed by humans?

Will “unplugging” a conscious AI be considered murder? Why shouldn’t it be? When we murder someone, we are, in essence, taking away their capacity to live and to be, without their consent. By unplugging a self-aware AI, wouldn’t we be violating its basic right to live?

At the core of this discussion lies the moral framework we use to decide whether a being deserves rights. Many of us believe that any being with the capacity to feel pleasure and pain should have access to certain rights. The AI rights revolution may therefore be contingent on intelligent machines being conscious: able to feel that they exist, and consequently to feel pleasure and pain. Granting rights to a few lines of code with no capacity for self-awareness or free will would be meaningless. Oxford mathematician Marcus du Sautoy believes that once AI thinking reaches a level similar to human consciousness, we have a duty to look after machines’ welfare. In his own words: “If we understand these things are having a level of consciousness, we might well have to introduce rights.”

Forbes contributor Alex Knapp argues that the very question of AI civil rights is absurd, because any computer-based system is going to be programmed at some level. If an AI is programmed to always be moral or always be evil, Knapp argues, it has no genuine capacity to choose. He poses a fascinating question: “If an AI can’t alter the rules its creator set up for his behavior, purpose, etc., is it really conscious in the same way that humans are?”

One could also argue that humans are programmed to act in a certain way. Many of our actions, both as individuals and as a species, are programmed by our genes and our environmental conditions. Scientists continue to debate whether or not humans have free will, yet most of us still believe that we should have access to basic rights. Shouldn’t that also apply to AI?

It is in our best interest to explore all the risks and opportunities of the future, regardless of how radical they seem to us today. As we develop potentially self-aware forms of AI, we must ask ourselves how we ought to treat them.

http://singularityhub.com/2016/09/09/if-machines-can-think-do-they-deser...