Asimov's 4th Law of Robotics

It seems Isaac Asimov didn’t envision needing a law to govern robots in the sorts of life-and-death situations where the choice isn’t between the life of a robot and the life of a human, but between the lives of multiple humans!

By William Schmarzo, Hitachi Vantara on September 8, 2017 in AI, Ethics, Kids, Robots, Self-Driving Car


I’m sure that many of you nerds have, like me, read the book “I, Robot,” the seminal work by Isaac Asimov (actually it was a series of stories, but I only read the one book) that explores the moral and ethical challenges posed by a world dominated by robots.

But I read that book like 50 years ago, so the movie “I, Robot” with Will Smith is actually more relevant to me today. The movie does a nice job of discussing the ethical and moral challenges associated with a society where robots play such a dominant and crucial role in everyday life. Both the book and the movie revolve around the “Three Laws of Robotics,” which are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

It’s like the “3 Commandments” of being a robot; adhere to these three laws and everything will be just fine. Unfortunately, that turned out not to be true (if 10 commandments cannot effectively govern humans, how do we expect just 3 to govern robots?).

There is a scene in the movie where Detective Spooner (played by Will Smith) is explaining to Doctor Calvin (who is responsible for giving robots human-like behaviors) why he distrusts and hates robots. He describes an incident where his police car crashed into another car and both cars were thrown into a cold, deep river – certain death for all occupants. A robot jumps into the water and decides to save Detective Spooner over a 10-year-old girl (Sarah) who was in the other car. Here is the dialogue between Detective Spooner and Doctor Calvin about the robot’s decision to save Detective Spooner instead of the girl:

Doctor Calvin: “The robot’s brain is a difference engine[1]. It’s reading vital signs, and it must have calculated that…”

Spooner: “It did…I was the logical choice to save. It calculated that I had a 45% chance of survival. Sarah had only an 11% chance. She was somebody’s baby. 11% is more than enough. A human being would have known that.”
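For the programmers in the audience, here is a minimal sketch (my own illustration, not anything from the movie) of the purely probabilistic rescue rule the robot’s reasoning implies:

    # A hypothetical sketch of the robot's utilitarian rescue rule:
    # save whoever has the highest probability of survival, with no
    # notion of empathy or of who is "somebody's baby".

    def choose_rescue(candidates):
        """candidates: list of (name, survival_probability) pairs."""
        return max(candidates, key=lambda c: c[1])

    # Spooner's example from the dialogue above:
    print(choose_rescue([("Spooner", 0.45), ("Sarah", 0.11)]))
    # -> ('Spooner', 0.45): "the logical choice," and exactly the
    #    answer a human being would have known to override.

The point of the scene, of course, is that this one-line rule is precisely what feels wrong about the robot’s decision.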

I had a recent conversation via LinkedIn (see, not all social media conversations are full of fake news) with Fabio Ciucci, the Founder and CEO of Anfy srl located in Lucca, Tuscany, Italy, about artificial intelligence and questions of ethics. Fabio challenged me with the following scenario:

“Suppose in the world of autonomous cars, two kids suddenly run in front of an autonomous car with a single passenger, and the autonomous car (robot) is forced into a life-and-death decision or choice as to who to kill and who to spare (kids versus driver).”

What decision does the autonomous (robot) car make? It seems Isaac Asimov didn’t envision needing a law to govern robots in these sorts of life-and-death situations where the choice isn’t between the life of a robot and the life of a human, but between the lives of multiple humans!

A number of surveys have been conducted to understand what an autonomous car should do when it must make a life-and-death decision between saving the driver and sparing pedestrians. From the article “Will your driverless car be willing to kill you to save the lives of others?” we get the following:

“In one survey, 76% of people agreed that a driverless car should sacrifice its passenger rather than plow into and kill 10 pedestrians. They agreed, too, that it was moral for autonomous vehicles to be programmed in this way: it minimized deaths the cars caused. And the view held even when people were asked to imagine themselves or a family member travelling in the car.”

While 76% is certainly not an overwhelming majority, there does seem to be a basis for creating a 4th Law of Robotics to govern these sorts of situations. But hold on: while in theory 76% favored saving the pedestrians over the driver, the sentiment changes when it involves YOU!

“When people were asked whether they would buy a car controlled by such a moral algorithm, their enthusiasm cooled. Those surveyed said they would much rather purchase a car programmed to protect themselves instead of pedestrians. In other words, driverless cars that occasionally sacrificed their drivers for the greater good were a fine idea, but only for other people.”

It seems that Mercedes has already made a decision about who to kill and who to spare. According to the article “Why Mercedes’ Decision To Let Its Self-Driving Cars Kill Pedestrians Is Probably The Right Thing To Do”, Mercedes is programming its cars to save the driver and kill the pedestrians or another driver in these no-time-to-hesitate, life-and-death decisions. Riddle me this, Batman: will how an autonomous car is “programmed” to react in these life-or-death situations impact your decision to buy a particular brand of autonomous car?

Another study published in the journal “Science” (The social dilemma of autonomous vehicles) highlighted the ethical dilemmas self-driving car manufacturers face, and what people believed would be the correct course of action: kill or be killed. About 2,000 people were polled, and the majority believed that autonomous cars should always make the decision that causes the fewest fatalities. On the other hand, most people also said they would only buy one if it meant their own safety was a priority.
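To make the tension in these surveys concrete, here is a hypothetical sketch of the two competing policies (the function names and numbers are mine, purely illustrative, and not any manufacturer’s actual software):

    # Two hypothetical decision policies, corresponding to the two
    # survey findings above.

    def minimize_fatalities(n_passengers, n_pedestrians):
        # The rule ~76% endorsed in the abstract: sacrifice the
        # smaller group to cause the fewest total deaths.
        if n_passengers < n_pedestrians:
            return "sacrifice passengers"
        return "sacrifice pedestrians"

    def protect_passenger(n_passengers, n_pedestrians):
        # The rule people prefer in the car they would actually buy
        # (and reportedly the choice Mercedes made).
        return "sacrifice pedestrians"

    # One passenger versus ten pedestrians:
    print(minimize_fatalities(1, 10))  # -> "sacrifice passengers"
    print(protect_passenger(1, 10))    # -> "sacrifice pedestrians"

Same scenario, two defensible rules, opposite outcomes – which is exactly the dilemma the surveys expose.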


4th Law of Robotics

Historically, the human/machine relationship was a master/slave relationship; we told the machine what to do and it did it. But today with artificial intelligence and machine learning, machines are becoming our equals in a growing number of tasks.

I understand that overall, autonomous vehicles are going to save lives... many lives. But there will be situations where these machines are forced to make life-and-death decisions about which humans to save and which humans to kill. But where is the human empathy that understands that every situation is different? Human empathy must be engaged to make these types of morally challenging life-and-death decisions. I’m not sure that even a 4th Law of Robotics is going to suffice.

[1] A difference engine is an automatic mechanical calculator designed to tabulate polynomial functions. The name derives from the method of divided differences, a way to interpolate or tabulate functions by using a small set of polynomial coefficients.
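As an aside for the curious, here is a small illustration (mine, not from the article) of the tabulation trick the footnote describes: once the initial differences of a polynomial are known, every further value falls out of repeated addition alone, with no multiplication at all.

    # Tabulating a polynomial by repeated addition of differences,
    # the way Babbage's difference engine did mechanically.

    def tabulate(initial_differences, steps):
        """initial_differences: [f(0), first difference, second difference, ...]."""
        diffs = list(initial_differences)
        values = []
        for _ in range(steps):
            values.append(diffs[0])
            # Each difference absorbs the one below it: pure addition.
            for i in range(len(diffs) - 1):
                diffs[i] += diffs[i + 1]
        return values

    # f(x) = x^2 has f(0) = 0, first difference 1, second difference 2:
    print(tabulate([0, 1, 2], 6))  # -> [0, 1, 4, 9, 16, 25]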

Original. Reposted with permission.

Editor: See also the KDnuggets Poll The Surprising Ethics of Humans and Self-Driving Cars, where respondents were much more willing to ride in a self-driving car that might kill them to save several pedestrians than in a car that would save them but kill pedestrians.

Related:

  • Deep Learning is not the AI future
  • 5 Free Resources for Getting Started with Self-driving Vehicles
  • Autonomous Vehicles Need Superhuman Perception for Success
