Isaac Asimov's Laws of Robotics Are Wrong

Op-ed

Peter W. Singer, Former Brookings Expert; Strategist and Senior Fellow, New America

May 18, 2009

When people talk about robots and ethics, they always seem to bring up Isaac Asimov’s “Three Laws of Robotics.” But there are three major problems with these laws and their use in our real world.

The Laws

Asimov’s laws initially consisted of three guidelines for machines, which together form a strict priority ordering (sketched in code after the list):

  • Law One – “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
  • Law Two – “A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.”
  • Law Three – “A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.”
  • Asimov later added the “Zeroth Law,” above all the others – “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
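
As a toy illustration only, here is that priority ordering in code. The `Action` fields below simply assume away the hard question of how a robot would ever establish these facts (they arrive pre-labeled); the names are hypothetical, not from any real system:

```python
from dataclasses import dataclass

# Toy model: each candidate action arrives pre-labeled with the facts the
# laws need. In reality, producing these labels is the unsolved problem;
# here they are simply given.

@dataclass
class Action:
    name: str
    harms_humanity: bool = False
    harms_human: bool = False
    disobeys_order: bool = False
    endangers_robot: bool = False

def permitted(a: Action) -> bool:
    """Apply the laws in strict priority order: Zeroth, First, Second, Third."""
    if a.harms_humanity:   # Zeroth Law outranks everything
        return False
    if a.harms_human:      # First Law
        return False
    if a.disobeys_order:   # Second Law yields to the First
        return False
    if a.endangers_robot:  # Third Law yields to the First and Second
        return False
    return True

# Obeying an order that would harm a human is forbidden by the First Law.
print(permitted(Action("fetch coffee")))                           # True
print(permitted(Action("follow attack order", harms_human=True)))  # False
```

The ordering itself is trivial; everything difficult hides inside those boolean labels, which is exactly where the problems below begin.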

The Debunk

The first problem is that the laws are fiction! They are a plot device that Asimov made up to help drive his stories. Indeed, his tales almost always revolved around how robots might follow these great-sounding, logical ethical codes and still go astray, and the unintended consequences that result. An advertisement for the 2004 movie adaptation of Asimov’s famous book I, Robot (starring the Fresh Prince and Tom Brady’s baby mama) put it best: “Rules were made to be broken.”

For example, in one of Asimov’s stories, robots are made to follow the laws, but they are given a particular definition of “human.” Prefiguring what now goes on in real-world ethnic cleansing campaigns, the robots recognize only people of a certain group as “human.” They follow the laws, but still carry out genocide.

The second problem is that no technology can yet replicate Asimov’s laws inside a machine. As Rodney Brooks of the company iRobot (named after the Asimov book, and the maker of the PackBot military robot and the Roomba robot vacuum cleaner) puts it, “People ask me about whether our robots follow Asimov’s laws. There is a simple reason [they don’t]: I can’t build Asimov’s laws in them.”

Roboticist Daniel Wilson was a bit more florid: “Asimov’s rules are neat, but they are also bullshit. For example, they are in English. How the heck do you program that?”
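
Wilson’s complaint can be made concrete. Any attempt to encode even the First Law immediately decomposes into subproblems that no one knows how to solve. The sketch below is purely illustrative (the function names are mine, not from any real robotics library); every stub marks an open research problem:

```python
# A hypothetical decomposition of the First Law into subproblems. None of
# these functions can actually be written today; each stub marks an open
# research problem, which is exactly Brooks's and Wilson's point.

def detect_humans(world_state):
    raise NotImplementedError("perception: no robust computational test for 'human being'")

def predict_consequences(action, world_state):
    raise NotImplementedError("prediction: forecasting outcomes in an open world")

def is_harmed(person, outcome):
    raise NotImplementedError("evaluation: 'harm' has no agreed formal definition")

def first_law_permits(action, world_state) -> bool:
    """Would the First Law allow this action? Every step below is unsolved,
    and the 'through inaction' clause would additionally require reasoning
    about counterfactuals (what happens if the robot does nothing)."""
    outcome = predict_consequences(action, world_state)
    return not any(is_harmed(person, outcome)
                   for person in detect_humans(world_state))
```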

The most important reason Asimov’s laws are not yet being applied is how robots are being used in our real world. You don’t arm a Reaper drone with a Hellfire missile or mount a machine gun on a MAARS (Modular Advanced Armed Robotic System) in order to avoid harming humans. Harm is the very point!

The same goes for building a robot that takes orders from any human. Do I really want Osama bin Laden to be able to order my robot about? And finally, the fact that robots can be sent out on dangerous missions to be “killed” is often the very rationale for using them. Giving them a sense of “existence” and a survival instinct would work against that rationale, and it opens up potential scenarios from another science fiction series, the Terminator movies. The point here is that much of the funding for robotics research comes from the military, which is paying for robots that follow the very opposite of Asimov’s laws. It explicitly wants robots that can kill, won’t take orders from just any human, and don’t care about their own existence.

A Question of Ethics

The bigger issue, though, when it comes to robots and ethics is not whether we can use something like Asimov’s laws to make machines that are moral (which may be an inherent contradiction, given that morality wraps together both intent and action, not mere programming).

Rather, we need to start wrestling with the ethics of the people behind the machines. Where is the code of ethics in the robotics field for what gets built and what doesn’t? To what would a young roboticist turn? Who gets to use these sophisticated systems, and who doesn’t? Is a Predator drone a technology that should be limited to the military? Well, too late: the Department of Homeland Security is already flying six Predator drones on border security. Likewise, many local police departments are exploring the purchase of their own drones to park over high-crime neighborhoods. I may think that makes sense, until the drone is watching my neighborhood. And what about me? Is it within my Second Amendment rights to have a robot that bears arms?

These all sound like the sort of questions that would only be posed at science fiction conventions. But that is my point. When we talk about robots now, we are no longer talking about “mere science fiction,” as one Pentagon analyst described these technologies. They are very much a part of our real world.
