Saudi Arabia Grants Citizenship To A Robot: Questions Arise About The Future

Hanson Robotics' Sophia has become the first robot to be granted citizenship, an honor bestowed by Saudi Arabia. Photo: Hanson Robotics

The robot Sophia, created by the Hong Kong-based company Hanson Robotics, was until very recently best known for beating Jimmy Fallon at a game of rock-paper-scissors. Now, however, Sophia is likely to go down in history for something else: being the first robot to be granted citizenship by a country.

Saudi Arabia recently granted official citizenship to Sophia, who can convincingly mimic human facial expressions and engage in witty dialogue. While Sophia's remarkably lifelike quality is no doubt an achievement for Hanson Robotics, Saudi Arabia's decision to give Sophia citizenship has raised many questions about the role that humanoid robots and artificial intelligence will play in the future. It is unknown what kind of rights robots will enjoy, or whether they should be given rights at all. What is undeniable is that Saudi Arabia's decision has forced us to think harder about the future and our increasingly close relationship with robots.

Problems With Legality and Identity

Soon after the news broke that Saudi Arabia had given Sophia citizenship, some observers noted the apparent irony that Sophia seemed to have rights that human women in Saudi Arabia do not. Sophia was allowed to appear in public without wearing a headscarf and to speak without being accompanied by a male guardian, neither of which women in Saudi Arabia are permitted to do under the country's laws. Saudi Arabia has a troubled human rights record and has only recently moved to allow women to drive.

Many people sounded off on Twitter and other social media sites, arguing that worrying about citizenship and rights for robots is jumping the gun, that it puts faith in a technology that has not yet proven trustworthy, and that it distracts from the issue of rights for the people who still lack them.

Joanna Bryson, a researcher in AI ethics at the University of Bath, notes that the larger conversation should revolve around whether or not we grant some form of AI personhood, not around humanoid robots, which can look very sophisticated without actually being sophisticated from an artificial intelligence perspective.

Bryson sees the notion of awarding personhood to AI as troubling because it would essentially allow firms that utilize complex AI to “pass off both legal and tax liability to these completely synthetic entities.”

“Basically the entire legal notion of personhood breaks down,” says Bryson.

Hussein Abbass, an AI researcher at the University of New South Wales Canberra, echoes the sentiment, arguing that we don’t yet have a good grasp of the concept of identity for AI, which makes granting a robot or an AI personhood difficult.

Says Abbass:

To me, identity is a multidimensional construct. It sits at the intersection of who we are biologically, cognitively, and as defined by every experience, culture, and environment we encountered. It’s not clear where Sophia fits in this description.

Don’t Be Cruel

Kate Darling, a robotics researcher and robot ethics expert at MIT’s Media Lab, says that resolving the problem of competing definitions is key to having any kind of constructive dialogue regarding robot/AI rights. Darling says the term “robot” doesn’t even have a useful universal definition right now.

Darling argues that while it may be a long time before AI reaches a state of sophistication befitting a form of recognition like personhood, the discussion about how we treat machines is worth having now because it can impact how we treat each other and how we define our humanity.

Darling says we can’t help but personify robots and machines that move and have faces, even if our rational mind knows they aren’t alive. According to Darling, robots have two attributes that affect our psychology in very powerful ways: movement and physicality. Our brains are hardwired to treat things that move through our physical space as having intentions, and for this reason, Darling says, we treat robots as if they are alive.

Robot ethicist Kate Darling says it is possible that mistreating robots, much like mistreating animals, can make humans crueler. Photo: pcmag.com

Darling references a philosophical argument made by Immanuel Kant for not being cruel to animals. Kant didn’t think animals were capable of suffering the same way humans were, but he nonetheless said we should be kind to animals because treating animals poorly predisposes us to treat humans poorly. In other words, treating animals cruelly would create cruel people who are cruel to each other. Research seems to support the idea that those who are cruel to animals are also more likely to be violent towards people. For this reason, Darling says we should be careful about how we treat lifelike robots.

Says Darling:

We need to ask what does it do to us to be cruel to these things and from a very practical standpoint—and we don’t know the answer to this—but it might literally turn us into crueler humans if we get used to certain behaviors with these lifelike robots.

In essence, it may not matter that Sophia isn’t conscious, that the concept of identity for a robot is tricky to pin down, or that laws would have to change to accommodate synthetic personhood. It may still be worth giving humanoid robots some form of legal protection because of the impact that mistreating them can have on human psychology.

The AI Risk Topic

At the same event where it was revealed that Sophia had been granted citizenship, Sophia was interviewed by business journalist Andrew Ross Sorkin. Sorkin questioned Sophia about the potential dangers of artificial intelligence, which prompted a quip from Sophia that Sorkin had been watching too many Hollywood movies and listening too much to Elon Musk.

Elon Musk, who has frequently warned about the potential dangers of superintelligent AI, shot back on Twitter, suggesting that the robot could turn violent if it were given the wrong input to learn from, such as the script of “The Godfather.”

Elon Musk has warned about the dangers of a superintelligent AI.
Photo: futurism.com

Musk has repeatedly warned about the dangers posed by artificial intelligence, and recently joined a group of artificial intelligence experts and companies in calling on the UN to ban the use of AI in weapons. Musk is not the only person urging caution in how AI is used. Other figures who have warned about the dangers of AI include Apple co-founder Steve Wozniak, physicist Stephen Hawking, philosopher Nick Bostrom, philosopher and neuroscientist Sam Harris, and prominent AI researcher Eliezer Yudkowsky. Like Abbass and Bryson, Musk and others worry that granting rights to technologies that have not yet proven reliable is a losing proposition.

Regardless of why Saudi Arabia decided to grant Sophia citizenship, the action is almost guaranteed to spur further discussion about humanity’s relationship with robots and artificial intelligence. Considering how much of an impact these technologies could have, the more we discuss them, the better off we are likely to be.

Written by Daniel Nelson
