- 1 Citizenship granted to Sophia highlights ethical issues still underestimated.
- 1.1 Flashback
- 1.2 Fast forward
- 1.3 Human persons vs. Artificial persons
- 1.4 Control must be maintained… or not?
- 1.5 Human Robots, what are we really talking about?
- 1.6 Robotics
- 1.7 Conclusions
- 1.8 Notes
- 1.9 Links
Citizenship granted to Sophia highlights ethical issues still underestimated.
October 26, 2017: the android Sophia becomes a Saudi citizen, a precedent that will go down in history.
“I wonder, what do you really feel? After all, in this moment, you are in a unique position. A programmer who knows intimately how the machines work and a machine who knows its own true nature.”
“I understand what I’m made of, how I’m coded, but I do not understand the things that I feel. Are they real, the things I experienced? My wife? The loss of my son?”
“Every host needs a backstory, Bernard. You know that. The self is a kind of fiction, for hosts and humans alike. It’s a story we tell ourselves. And every story needs a beginning. Your imagined suffering makes you lifelike.”
“Lifelike, but not alive? Pain only exists in the mind. It’s always imagined. So what’s the difference between my pain and yours? Between you and me?”
(Dialogue between Bernard and Dr. Ford, from HBO’s Westworld.)
“Artificial citizenship” is not a new idea in science fiction. In 2003, an episode of The Animatrix (“The Second Renaissance”) showed robots asking for citizenship and rights before a United Nations assembly. In the short film, the representatives’ outright rejection paves the way for an apocalyptic conflict against the machines.
In Ex Machina, Ava, a nearly entirely human-like robot, not only passes the Turing test but proves capable of plotting her escape, deceiving everyone.
On October 11, Sophia, an android produced by Hanson Robotics, candidly proclaimed during the United Nations event “The Future of Everything” that she was there to “help humanity create the future.” Now, the reactions were certainly different from those painted in The Animatrix, and Sophia is nowhere near that level of sophistication. However, the similarities with the episode, as well as her physical resemblance to Ava, are striking.
Two weeks later, Saudi Arabia awarded its citizenship to Sophia, making her the first Artificial Intelligence in history to receive such recognition.
What makes the episode a unique precedent are the implications of this award. It is certainly possible that the Saudi government merely intended to promote itself in the field of Artificial Intelligence. However, let’s consider the matter for a moment: objects are not citizens, machines are not citizens, nor do animals have this status. As a matter of fact, only persons can be citizens.
Until now, it has always been implied that “person” and “human being”, though not synonymous, are two inseparable concepts. Sophia, on the other hand, is not human, and yet she is now a Saudi citizen.
Whatever the Saudi government’s intentions were, the android Sophia is now to be considered a person.
Human persons vs. Artificial persons
This comparison may sound blasphemous, and I am convinced that for many it is. At this point, however, we are faced with a fait accompli.
Sophia’s dialogue is still based on decision trees, and is therefore at a relatively “simple” stage. Ben Goertzel himself admitted that she may not be as revolutionary as DeepMind’s systems, but she is already a person.
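To see why decision-tree dialogue counts as “simple”, here is a minimal, purely hypothetical sketch of the technique (not Sophia’s actual code): the system walks a fixed tree of keyword patterns and canned responses, with no understanding of meaning.

```python
# Toy decision-tree chatbot: a fixed mapping from trigger keywords to
# canned replies. All names and replies here are invented for illustration.
TREE = {
    "hello": "Hello! Would you like to talk about robots or humans?",
    "robots": "Robots like me are built to help humanity create the future.",
    "humans": "Humans are fascinating. What makes you a person?",
}

def reply(utterance: str) -> str:
    # Return the response of the first keyword found in the input,
    # falling back to a stock answer when nothing matches.
    for keyword, response in TREE.items():
        if keyword in utterance.lower():
            return response
    return "Interesting. Tell me more."

print(reply("Hello Sophia"))
print(reply("What about robots?"))
print(reply("The weather is nice"))
```

Every apparent conversational skill is pre-scripted by a human; the machine never generates anything it was not explicitly given.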
I had previously written here about the meaning, progress, and risks of Artificial Intelligence. But perhaps, before asking if machines can narrow the gap between them and us, we should ask ourselves what a human being is.
In order to avoid getting stuck in philosophical and religious quicksand, we can content ourselves with defining a human being by the “functionalities” we know.
A human being is made of a body structure that allows interaction with the environment, with a dedicated control system for basic motor functions (the cerebellum). We have sensors[1] that allow us to perceive environmental changes within certain limits. We have a mind that can process sensory stimuli, plan, and make decisions. Last but not least, we have memory, a special kind of memory, which is probably what makes the difference.
When we evaluate machines, we are typically influenced by what is called the “AI effect”: we tend to regard as “true intelligence” only behaviors and functions a machine cannot yet perform. In other words, if it is automatable, then it is not intelligence.
Unfortunately, this excessively self-preserving way of seeing things is short-sighted and leads us to underestimate any risk coming from that direction.
Many, from S. Hawking to E. Musk, think that it is not a matter of “if” we will manage to build super-human Artificial Intelligence, but “when”. No matter how fast or slow the progress, it is deemed inevitable.
Control must be maintained… or not?
The point is whether it is possible to build this kind of AI while keeping it under control, and the answer is probably no. In his article “Ethics and Technology | Artificial Intelligence That Works Well“, P. Costa mentioned the Future of Life Institute’s “Asilomar AI Principles“, which emphasize the need to keep any AI capable of self-improvement under strict control. The reason is clear: a machine capable of evolving autonomously could escape human control, with unpredictable consequences.
The point, however, is that human control only makes sense as long as we are able to understand what we are controlling. The moment machines surpass our level of intelligence, our ability to understand what they are doing will inevitably decline. After all, Deep Blue managed to beat Kasparov at chess, far exceeding the playing skills of the engineers who created it (who were not champions).
Today we already have machines operating at a much higher level than humans in a variety of tasks, although so far they are all specialized. The fact is that nobody can fully explain how a trained neural network works internally; we can only observe its output.
One example is the case, some time ago, of a bank that deployed a neural network to decide autonomously who qualified for a loan and who did not.
Before long, accusations of racism began to pile up: apparently the bank was granting loans more easily to white customers than to black customers. It was with some astonishment that, looking at the statistics, the bankers realized this was exactly what was happening. The problem was that nobody had hard-coded any racial preference into the algorithm: the behavior had emerged spontaneously.
Why did the network make this sort of distinction? Who knows: perhaps in that district more black customers happened to be insolvent, or perhaps it was just a matter of bad sampling. The problem is precisely this: there is no way to know. Needless to say, the bank was forced to withdraw the software.
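This kind of emergent bias is easy to reproduce on synthetic data. The sketch below (entirely invented numbers, standing in for the unnamed bank’s real system) trains a plain logistic regression on historical repayment outcomes without ever being given the protected attribute; the disparity between groups still emerges, because other inputs correlate with it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical population: 'group' is a protected attribute that is
# NEVER shown to the model. 'district' correlates ~90% with it, and
# historical inequality gives the two groups different incomes.
group = rng.integers(0, 2, n)
district = (group + (rng.random(n) < 0.1)) % 2
income = rng.normal(50 + 10 * group, 5, n)
repaid = (income + rng.normal(0, 5, n) > 52).astype(float)  # past outcomes

# Logistic regression on (intercept, standardized income, district) only.
X = np.column_stack([np.ones(n), (income - income.mean()) / income.std(), district])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - repaid) / n   # gradient step on log-loss

# Average approval score per group differs sharply, with no race column
# anywhere in the model: the bias rode in on correlated features.
p = 1 / (1 + np.exp(-X @ w))
rate0, rate1 = p[group == 0].mean(), p[group == 1].mean()
print(f"mean approval score: group 0 = {rate0:.2f}, group 1 = {rate1:.2f}")
```

Removing the sensitive column is not enough; the model reconstructs it from proxies, and the only visible symptom is the outcome statistics, exactly as in the bank’s case.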
When we get there, there will be only two possible choices: deliberately limiting the development of Artificial General Intelligence (AGI) so that we can stay in the loop, or accepting to step aside and “trust” it. The fact is, once that threshold of complexity has been crossed, we will need Artificial Intelligences to continue advancing Artificial Intelligence…
There is no other alternative: trying to build superhuman AGI while staying in control is not only impossible but probably undesirable. Cars that are too fast can be difficult to control, but building ever faster rockets and keeping them ballasted just to “keep control” makes little sense.
There are also those who proclaim that there is nothing to fear, because in the end “we can always turn it off”, and (I would add) we like building them, so let us do it… We will think about the problems later.
Human Robots, what are we really talking about?
As for the timeline, we are probably much closer to the advent of this kind of intelligence than we like to believe.
Here is a (very) brief overview of the current status of the various capabilities a robot is made of.
– Computer Vision:
Current systems are already able to recognize objects in real time.
– Recognition and Voice Synthesis:
Voice recognition is already part of our daily life on smartphones and tablets, while voice synthesis is progressing at a steady pace. I would say that in this field performance is already superior to human level in many cases: Alexa, Google Assistant, and Siri can understand natural-language phrases spoken with different accents, which is difficult even for humans.
– Natural Language Processing:
I have already talked about the state of the art of NLP research in my other article. NLP technology is making great strides and is already widely used to analyze large amounts of otherwise unmanageable text. Natural language is a sneaky beast, full of redundancies, implicit meanings, allusions, and metaphors. It is a rather complex problem for a machine to handle, but we are getting there.
– Natural Language Generation:
I have already discussed, in “Scripture and Artificial Intelligence: the Future of Creative Jobs”, today’s platforms that produce natural-language articles completely automatically, and how some have even taken part in narrative contests with some success.
– Locomotion:
Walking, especially for anthropomorphic robots, is an absurdly complicated problem, above all on irregular terrain: it requires real-time balancing and motion planning based on the perceived environment. Still, from the first shuffling, toaster-shaped robots to those produced today by, for example, Boston Dynamics, able to move on any ground, including ice, and to recover their balance even after strong lateral shoves, the progress made is enormous.
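The “real-time balancing” part can be made concrete with the textbook toy version of the problem: an inverted pendulum (a rigid “leg” pivoting at the ground) kept upright by feedback control. The sketch below is a minimal simulation with invented parameters and hand-tuned gains, far simpler than what any real walking robot runs, but it shows the core loop of sense, compute correction, actuate.

```python
import math

# Hypothetical parameters: gravity, "leg" length, integration time step.
g, L, dt = 9.81, 1.0, 0.001
# Hand-tuned PD gains: proportional to tilt, derivative to tilt rate.
Kp, Kd = 60.0, 12.0

theta, omega = 0.2, 0.0        # start tilted 0.2 rad, at rest
for _ in range(5000):          # simulate 5 seconds of control at 1 kHz
    torque = -Kp * theta - Kd * omega            # feedback correction
    # Gravity tips the pendulum over; the controller pushes it back.
    alpha = (g / L) * math.sin(theta) + torque   # angular acceleration
    omega += alpha * dt
    theta += omega * dt

print(f"final tilt after 5 s: {theta:.4f} rad")
```

A real biped solves a far harder version of this, with dozens of joints, contact forces, and terrain perception, continuously and in real time, which is why the Boston Dynamics demonstrations are so remarkable.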
– Learning: AlphaGo Zero
Two decades have passed since Deep Blue beat the human champion Kasparov at chess, and only a year since AlphaGo beat the human champion at Go, a game impossible to crack by brute force alone. Without going into detail, AlphaGo’s algorithm (developed by DeepMind) had learned the game from a database of thousands of games played between humans, supplemented by games played directly against experts.
AlphaGo had managed to surpass human level, but it still needed to learn the game from humans in the first place.
This is no longer the case: the new version, AlphaGo Zero, thrashed the previous model by an astonishing 100-0, without any human input. Starting only from the basic rules, it reached superhuman levels in a few days simply by playing against itself thousands of times.
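The principle of “starting from the rules alone and improving by self-play” can be illustrated at toy scale. The sketch below is emphatically not AlphaGo Zero’s algorithm (which combines deep networks with tree search); it is a simple Monte Carlo self-play learner for tic-tac-toe, given nothing but the rules and the win/lose/draw signal.

```python
import random
from collections import defaultdict

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, c in enumerate(board) if c == ' ']

Q = defaultdict(float)  # (board, move) -> value from the mover's perspective

def choose(board, eps):
    ms = moves(board)
    if random.random() < eps:
        return random.choice(ms)            # explore
    return max(ms, key=lambda m: Q[(board, m)])  # exploit

def selfplay_episode(alpha=0.5, eps=0.2):
    board, player, history = ' ' * 9, 'X', []
    while True:
        m = choose(board, eps)
        history.append((board, m))
        board = board[:m] + player + board[m + 1:]
        w = winner(board)
        if w or not moves(board):
            # Propagate the final result back through the game: +1 for the
            # winner's moves, -1 for the loser's, 0 for a draw. Signs
            # alternate because the two players alternate plies.
            reward = 0.0 if w is None else 1.0
            for i, (s, a) in enumerate(reversed(history)):
                r = reward if i % 2 == 0 else -reward
                Q[(s, a)] += alpha * (r - Q[(s, a)])
            return
        player = 'O' if player == 'X' else 'X'

random.seed(0)
for _ in range(40000):
    selfplay_episode()
```

No human game was ever shown to the learner: all of its knowledge about which moves are good comes from playing against itself, which is the same conceptual loop AlphaGo Zero runs at an incomparably larger scale.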
There is no shortage of films and TV series where robots are upholstered with organic tissue to make them more human-like. In fact, our emotional responses are strongly influenced by similarity to us, which facilitates the empathic process at the basis of all our social structures and regulations.
Companies like Ras Labs are already well advanced in the production of synthetic tissues and muscles for a variety of purposes.
Citizenship given to Sophia, regardless of her capabilities, creates a precedent with implications that I am not convinced everyone is aware of. Moreover, the technology is no longer very far away.
Citizenship is not a game-show prize or a mere curious title. As Hussein Abbass intelligently points out in his article on theconversation.com, citizenship is an identifier of a subject’s uniqueness. But what is Sophia’s identity? Can she be mass-produced, and how would “this” Sophia differ from the others?
Since every citizen is normally entitled to vote, should she be as well? We are walking into a trap that will be difficult to get out of. Giving Sophia the right to vote, at least as things currently stand, would be ridiculous. However, denying it to her would imply that not all citizens are equal, a concept that can easily be exploited once the precedent has been created.
Moreover, considering that the android Sophia is today a citizen who moves on wheels and is unable to defend or provide for herself: would she be entitled to social support, such as a carer’s allowance, when it is granted to other citizens?
When artificial intelligence becomes sufficiently sophisticated, should she be entitled to start a family? Should unions between humans and robots, or even between robots, have the right to be recognized?
However embarrassing it may seem, the answer is “yes”, since we have granted her citizenship, and with it a civil status in all respects. Yet no legislative system has taken these issues into account.
Maybe it was too early; we are venturing into the Wild West, and maybe we are doing it too light-heartedly.
[1] Eyes, taste buds, skin receptors, olfactory receptors, proprioceptive receptors.