If AI becomes self-aware, will we need to give it the rights we normally reserve for humans?

  • The basic question is personhood, not intelligence
  • Does artificial intelligence need to be protected from people?
  • What we actually need to do is protect people from themselves
  • Legal abuse is a legitimate cause for concern

As artificial intelligence (AI) and robotics advance, many believe it’s only a matter of time before machines develop sentience that rivals our own. In a variety of white-collar jobs, for instance, advanced AI can already outshine its human counterparts. But as these systems grow smarter, they raise troubling ethical questions about what philosophers call ‘personhood’, accountability, and legal rights.

When does a machine deserve the fundamental respect we normally reserve for human beings? A variety of experts have weighed in on this pressing question, and what they have to say just might surprise you.

The basic question is personhood, not intelligence

AI systems like IBM’s Watson or Enlitic’s diagnostic algorithms are already plenty smart. Defeating two Jeopardy! champions, comprehending state-of-the-art medical research, and diagnosing complex diseases are nothing to sneeze at when it comes to judgments of competence. But the real question isn’t whether a machine is as smart as a human; it’s whether the machine qualifies as what ethicists and philosophers call a ‘person’. “The three most important thresholds in ethics are the capacity to experience pain, self-awareness, and the capacity to be a responsible moral actor,” notes the sociologist and futurist James Hughes, executive director of the Institute for Ethics and Emerging Technologies.

This isn’t a question of intelligence, however. A self-aware toy that can feel pain – whatever its cognitive abilities – counts. A very smart system like Watson, on the other hand, doesn’t. Take Pleo, for example: a small robot designed to look like a dinosaur, it reacts to touch and interaction, and it has clear preferences. If you flip Pleo onto its back, it asks you not to do that and whines until you set it upright again. Kate Darling, a researcher at the MIT Media Lab, tested people’s empathy for the little robot: she had participants play with it for a while, then asked them to destroy it. Very few were willing to hurt the tiny thing, even though it can’t feel pain and isn’t self-aware. As Darling says, “People are primed, subconsciously, to treat robots like living things, even though on a conscious level, on a rational level, we totally understand that they’re not real.” The more robots look like us and the more we depend on them, the stronger the urge to anthropomorphise them, projecting personality where there really isn’t any.

But what happens when there is? If Pleo were self-aware, if it really were afraid of being turned upside down, would it be ethical to hurt it?

Does artificial intelligence need to be protected from people?

“Traditionally, under the law, you’re either a person or you are property — and the problem with being property is that you have no rights. In the past, we’ve made some pretty awful mistakes,” observes Linda MacDonald-Glenn, a bioethicist and attorney-at-law. Think of the horrors of slavery, for instance. Where we draw the line between person and property is everything. Would we be comfortable with self-aware, fearful slaves as long as they were ‘only’ machines? Would you really ‘own’ C-3PO and R2-D2, the droids from Star Wars?

“We talk about protecting ourselves from AI, but what about protecting AI from us?” asks Raya Bidshahri, a science writer for Singularity Hub. For some experts, the crux of the issue is us, not the looming threat of AI overlords.

What we actually need to do is protect people from themselves

One complex aspect of this question is what mistreating robots would mean for us. As Woodrow Hartzog, a professor of law and computer science, explains, “Do we want to prohibit people from doing certain things to robots not because we want to protect the robot, but because of what violence to the robot does to us as human beings?” Learning to act out our desires – good or bad – without limitation might be a recipe for disaster. Science fiction has already explored this territory: think of Blade Runner’s replicants or Westworld’s robotic theme park. Is it OK to abuse a sentient machine? And what would saying ‘yes’ mean – for us?

We might consider granting robots certain rights, what experts call negative freedoms. These would protect machines from particular forms of treatment rather than grant positive rights to our creations. For instance, we might outlaw sex androids or criminalise violence toward anthropomorphic robots – not to protect the machines, but to prevent actions that reinforce sociopathy.

Legal abuse is a legitimate cause for concern

But Joanna J. Bryson, a reader at the University of Bath and co-author of a recent paper with the lawyers Mihailis E. Diamantis and Thomas D. Grant, disagrees. In her view, such intentions to protect robots are well-meaning but misplaced – and potentially open to abuse. “Corporations are legal persons, but it’s a legal fiction. It would be a similar legal fiction to make AI a legal person,” Bryson reasons. “What we need to do is roll back, if anything, the overextension of legal personhood — not roll it forward into machines. It doesn’t generate any benefits; it only encourages people to obfuscate their AI.”

Consider, for instance, that when a corporation does something wrong – say, kills a person through negligence – it can’t really be punished. Although it’s a legal entity, jail means nothing to a business. Bryson worries that if robots are granted personhood and gain the legal rights of people, we’d open the door to legal loopholes. If self-driving cars were legal persons, for instance, they could enter into their own contracts and serve as tax shelters for the companies that ‘own’ them. And the consequences for legal concepts like criminal responsibility could get very tricky indeed. Can self-aware robots commit crimes? Be tried? Go to jail? How would any of this work?

What’s clear is that there are no easy answers. And it’s just as clear that we need to begin thinking through these issues now, before we’re faced with a machine whose claim to personhood we can’t deny.
