Artificial Intelligence and ethics: fears and controversy


On July 28th, 2015, during the 24th IJCAI, an open letter addressed to the conference was presented. Thanks to some illustrious names among its signatories1, the letter quickly flew around the world, fueling discussion and doubt.

The letter is, in essence, a warning against AI-augmented weapons, expressing the fear that their development would lead to a race for this type of weaponry. Drones are very cheap nowadays (you can buy them almost anywhere), and weapons have always been accessible. Considering that AI research is progressing at a steep pace, accessible autonomous weapons might arrive quite soon.

However, over the last 20 years, all sorts of dystopian sci-fi movies have spread unrealistic expectations. One may therefore wonder whether we will really face risks like the ones depicted in the letter.

Apocalyptic future with autonomous weapons (Call of Duty – Activision)

The first point to note is that with “Artificial Intelligence” we are not talking about just another tool, or just another technological revolution. We are talking about an epochal turnaround: a future where autonomous systems might replace us altogether.

Autonomous systems equipped with sophisticated AI could bypass human control. Once we are out of the loop, we enter a new world, where “responsibility” is hard to pin down.

But what is this Artificial Intelligence?

To begin with, we can borrow a definition from an authoritative source:

Encyclopaedia Britannica: “Artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.”

The Turing Test

Back in 1950, Alan Turing proposed a test to address a new type of question: “Can machines think?”2. In short, his idea was to stick to observing the behavior of the machine: if a human observer cannot distinguish it from that of a human, we can say that the machine can “think”.

Turing Test: the observer tries to find out, via written chat, whether the interlocutor is a human or a machine

This black-box approach (see fig.) implies that the process underlying a decision or a behavior is irrelevant. It is the outcome (or the look of it) that counts.
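To make the black-box idea concrete, here is a minimal sketch of the imitation game as a blind chat loop (purely illustrative; machine_reply and human_reply are hypothetical toy stand-ins, not any real chatbot):

```python
import random

# A minimal sketch of the imitation game as a blind chat loop.
# The point is that the judge only ever sees labeled text,
# never which process produced it.

def machine_reply(message: str) -> str:
    # A deliberately dumb canned-response "machine".
    return random.choice(["Interesting, tell me more.",
                          "Why do you say that?",
                          "I had not thought about it that way."])

def human_reply(message: str) -> str:
    # Stand-in for a real human typing at a keyboard.
    return "Honestly, I am not sure how to answer that."

def imitation_game(questions):
    # Randomly hide the two respondents behind the labels A and B,
    # so the transcript alone carries no clue about who is who.
    responders = [machine_reply, human_reply]
    random.shuffle(responders)
    players = dict(zip("AB", responders))
    transcript = []
    for q in questions:
        for label in "AB":
            transcript.append((label, q, players[label](q)))
    return transcript

for label, q, a in imitation_game(["Do you ever get bored?"]):
    print(f"[{label}] Q: {q}")
    print(f"[{label}] A: {a}")
```

The judge who reads this transcript is in exactly the position the figure describes: only the outcome is visible, never the underlying process.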

Is it correct to base our judgments merely on appearances? As weird as it may sound at first glance, we should keep in mind that human interaction is almost entirely based on appearances (in fact, they are all we can see), filtered by personal prejudices and expectations. The proof is that there are cases of “non-intelligent” software that managed to pass the test3 (the classic trick is sketched below), as well as cases of humans who failed it4.
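The trick mentioned in note 3, smartly rephrasing the interrogator's own words and bouncing them back, can itself be sketched in a few lines. This is a generic pronoun-reflection toy, not the code of any actual program that passed the test:

```python
import re

# Minimal sketch of the "bounce it back" trick (see note 3):
# swap pronouns and return the interrogator's own words as a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "mine": "yours",
    "you": "I", "your": "my", "am": "are", "are": "am",
}

def bounce(message: str) -> str:
    words = re.findall(r"[a-z']+", message.lower())
    reflected = [REFLECTIONS.get(w, w) for w in words]
    return "Why do you say " + " ".join(reflected) + "?"

print(bounce("I am afraid of intelligent machines"))
# -> Why do you say you are afraid of intelligent machines?
```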

This kind of approach takes us back to the question of what intelligence really is and how (if at all) it is possible to measure it.

Development

Ok, so the question remains: are the open letter's concerns justified? Should we be worried?

For the concern to be justified, we need further advances on two fronts:

  • The platform: robots or drones that can be deployed with precision in military operations and are relatively cheap.
  • An Artificial Intelligence able to control them and to make decisions autonomously.

Robotics

It is public knowledge that the US Department of Defense has funded M3 (Maximum Mobility and Manipulation), a program with these stated goals:

  • “Create a significantly improved scientific framework for the rapid design and fabrication of robot systems”
  • “Greatly enhance robot mobility and manipulation in natural environments.”
  • “Significantly improve robot capabilities through fundamentally new approaches to the engineering of better design tools, fabrication methods, and control algorithms. The M3 program covers scientific advancement across four tracks: design tools, fabrication methodologies, control methods, and technology demonstration prototypes.”

Given the sponsor, it is easy enough to imagine that the ultimate goals are essentially military.

To understand the state of progress within this program, just take a look at the Boston Dynamics website (the company was acquired by Google in 2013, then sold to SoftBank last July) to see robots running at nearly 30 mph and remaining balanced even after strong pushes, even on ice.

Military applications

We have seen how military research is evolving very rapidly in the development of robots and androids that can move with relative autonomy5. DARPA itself is funding this research, so it is likely that the objective is to deploy them in field operations.

Robots of this kind require investments of hundreds of millions of dollars and are obviously beyond the reach of most. However, even their use by regular armies raises issues that are yet to be tackled.

Drones: Low-cost lethal weapons?

Remote-controlled drones are no longer just a curiosity: companies such as Parrot and DJI have invaded the market with their quadrotors, which you can practically buy in toy shops6.

Drone made by Desert Wolf, equipped with paintball riot guns

These drones have by now reached high levels of stabilization, and they are sophisticated enough to fly in coordinated groups. Furthermore, they are virtually impossible to detect and… they are programmable (see the sketch below).
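To give a concrete idea of what “programmable” means here, the following minimal sketch computes waypoint plans for a small coordinated group. Waypoint and plan_formation are hypothetical names invented for this illustration, not part of any real drone SDK; commercial SDKs expose similar mission-planning primitives (upload a route, then execute it):

```python
from dataclasses import dataclass

# Purely illustrative: Waypoint and plan_formation are invented for
# this sketch, not any real SDK's API.

@dataclass
class Waypoint:
    lat: float    # latitude in degrees
    lon: float    # longitude in degrees
    alt_m: float  # altitude in meters

def plan_formation(leader_route, n_drones, spacing_deg=0.0001):
    """Give each drone the leader's route, shifted sideways by a fixed offset."""
    plans = []
    for i in range(n_drones):
        offset = i * spacing_deg  # roughly 10 m of longitude near the equator
        plans.append([Waypoint(w.lat, w.lon + offset, w.alt_m)
                      for w in leader_route])
    return plans

route = [Waypoint(45.4642, 9.1900, 30.0), Waypoint(45.4650, 9.1910, 30.0)]
for i, plan in enumerate(plan_formation(route, n_drones=3)):
    print(f"drone {i}: {[(w.lat, round(w.lon, 4)) for w in plan]}")
```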

It does not take a strong imagination to think about mounting weapons on these drones. In fact, there are already videos on the internet of drones equipped with anti-riot weapons, or even flamethrowers (!).

The Autonomy issue

So, Artificial Intelligence is not science fiction: it is already very much part of our lives. It is basically everywhere: smartphone voice recognition, face recognition in videos and photographs7, medical diagnosis, video games…

Hawking and Musk's open letter has obviously raised vibrant discussions, from denials that we are at risk of any “race” for autonomous weapons to more general accusations of instigating unnecessary panic.

The topic is so sensitive that institutions like MIRI are engaged in basic mathematical research to make sure that “smarter-than-human” systems have a positive impact. Elon Musk himself donated 10 million dollars to “keep artificial intelligence beneficial“. It is easy to see where this concern comes from, especially considering that software supporting tactical decision-making in the military has already been a reality for some time.

Creativity and Brain Control

Computer superiority has been expanding from raw processing speed to activities we liked to think were our prerogative. There are already systems that outperform humans in a wide range of tasks, such as chess, Jeopardy!8, data mining, and theorem proving. Moreover, there are already AI applications engaged in “creative” tasks, like the autonomous production of music and narratives9. It is also a well-known fact by now that both Facebook and Google have developed neural networks10 that can automatically generate artistic images, and systems such as AARON can even paint.

Advances in brain control are also pertinent to our AI discussion. Already in the 1960s, José Delgado, in his famous experiment in the Córdoba bullring, was able to remotely control animal behavior through implanted biochips11.

Furthermore, Nicolelis's laboratory implemented the first brain-machine interfaces, through which a monkey was able to control a robotic arm using just its thoughts12. Not only that: they also realized the first “brain-to-brain” interface, through which two rats managed to share complex tactile and motor experiences.

Notes

1. E.g. Stephen Hawking and Elon Musk.

2. An interesting discussion on Turing’s work is also available on The Alan Turing Internet Scrapbook.

3. A common trick for a machine to pass the test was to smartly rephrase the question and bounce it back (see the sketch in the main text).

4. Meaning those humans were mistaken for machines by the observers.

5. As in adapting to environmental features.

6. You can see here one of the many charts of commercial drones to get an idea.

7. Also used by Facebook for cross-matching data for advertising purposes.

8. Jeopardy! is an American TV show in which the host presents the competitors with general-knowledge clues in the form of answers and asks them to formulate the right question. Although this kind of task seemingly requires a contextualization ability we thought typical of humans, IBM's Watson managed to beat the show's human champions in 2011.

9. The Associated Press already publishes thousands of articles like this, generated entirely automatically with Automated Insights' Wordsmith.

10. E. Denton et al. (2015), “Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks”, arXiv:1506.05751.

11. J. Delgado (1971), “Physical Control of the Mind: Toward a Psychocivilized Society”.

12. P.J. Ifft, S. Shokur, Z. Li, M.A. Lebedev, M.A.L. Nicolelis (2013), “A Brain-Machine Interface Enables Bimanual Arm Movements in Monkeys”.

 
