Asimov's Laws Won't Stop Robots from Harming Humans, So We've Developed a Better Solution

July 11, 2017

4 min read

Instead of laws to restrict robot behavior, robots should be empowered to pick the best solution for any given scenario

By Christoph Salge & The Conversation US

The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

How do you stop a robot from hurting people? Many existing robots, such as those assembling cars in factories, shut down immediately when a human comes near. But this quick fix wouldn’t work for something like a self-driving car that might have to move to avoid a collision, or a care robot that might need to catch an old person if they fall. With robots set to become our servants, companions and co-workers, we need to deal with the increasingly complex situations this will create and the ethical and safety questions this will raise.

Science fiction already envisioned this problem and has suggested various potential solutions. The most famous was author Isaac Asimov’s Three Laws of Robotics, which are designed to prevent robots from harming humans. But since 2005, my colleagues and I at the University of Hertfordshire have been working on an idea that could be an alternative.

Instead of laws to restrict robot behaviour, we think robots should be empowered to maximise the possible ways they can act so they can pick the best solution for any given scenario. As we describe in a new paper in Frontiers, this principle could form the basis of a new set of universal guidelines for robots to keep humans as safe as possible.

The Three Laws

Asimov’s Three Laws are as follows:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

While these laws sound plausible, numerous arguments have demonstrated why they are inadequate. Asimov’s own stories are arguably a deconstruction of the laws, showing how they repeatedly fail in different situations. Most attempts to draft new guidelines follow a similar principle to create safe, compliant and robust robots.

One problem with any explicitly formulated robot guidelines is the need to translate them into a format that robots can work with. Understanding the full range of human language and the experience it represents is a very hard job for a robot. Broad behavioural goals, such as preventing harm to humans or protecting a robot’s existence, can mean different things in different contexts. Sticking to the rules might end up leaving a robot helpless to act as its creators might hope.

Our alternative concept, empowerment, stands for the opposite of helplessness. Being empowered means having the ability to affect a situation and being aware that you can. We have been developing ways to translate this social concept into a quantifiable and operational technical language. This would endow robots with the drive to keep their options open and act in a way that increases their influence on the world.
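One common way to make empowerment quantifiable is information-theoretic: it measures how much an agent’s available actions can influence its future states. In the simplest deterministic setting this reduces to the logarithm of the number of distinct states the agent can still reach. The gridworld below is a minimal hypothetical sketch of that idea (the world size, actions and dynamics are invented for illustration), not the implementation used in our work:

```python
import math
from itertools import product

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # up, down, right, left, stay
SIZE = 5  # hypothetical 5x5 gridworld

def step(state, action):
    """Deterministic dynamics: move unless blocked by a wall."""
    x, y = state
    dx, dy = action
    nx, ny = x + dx, y + dy
    if 0 <= nx < SIZE and 0 <= ny < SIZE:
        return (nx, ny)
    return state  # bumping into a wall leaves the state unchanged

def empowerment(state, n):
    """n-step empowerment for deterministic dynamics:
    log2 of the number of distinct states reachable by any n-step action sequence."""
    reachable = set()
    for seq in product(ACTIONS, repeat=n):
        s = state
        for a in seq:
            s = step(s, a)
        reachable.add(s)
    return math.log2(len(reachable))

# An agent in open space keeps more options than one boxed into a corner:
print(empowerment((2, 2), 2))  # centre of the grid, 13 reachable states
print(empowerment((0, 0), 2))  # corner, only 6 reachable states
```

An empowerment-driven agent would then simply prefer actions that lead to states where this number stays high, which is why it naturally avoids getting cornered, stuck or shut in.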

When we tried simulating how robots would use the empowerment principle in various scenarios, we found they would often act in surprisingly “natural” ways. It typically only requires them to model how the real world works but doesn’t need any specialised artificial intelligence programming designed to deal with the particular scenario.

But to keep people safe, the robots need to try to maintain or improve human empowerment as well as their own. This essentially means being protective and supportive. Opening a locked door for someone would increase their empowerment. Restraining them would result in a short-term loss of empowerment. And significantly hurting them could remove their empowerment altogether. At the same time, the robot has to try to maintain its own empowerment, for example by ensuring it has enough power to operate and it does not get stuck or damaged.
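The door and restraint examples above can be sketched as a simple trade-off: the robot scores each candidate action by a weighted sum of the human’s empowerment and its own, then picks the maximum. The option counts below are invented purely for illustration; a real system would derive them from a model of the world:

```python
import math

def log_emp(n_options):
    """Empowerment proxy: log2 of the number of options an agent retains."""
    return math.log2(n_options)

# Hypothetical effect of each candidate robot action on how many
# options the human and the robot each have afterwards.
OUTCOMES = {
    "open_locked_door": {"human": 8, "robot": 4},
    "do_nothing":       {"human": 4, "robot": 4},
    "restrain_human":   {"human": 1, "robot": 4},
}

def best_action(outcomes, w_human=1.0, w_robot=1.0):
    """Pick the action maximising weighted human plus robot empowerment."""
    return max(outcomes, key=lambda a: w_human * log_emp(outcomes[a]["human"])
                                     + w_robot * log_emp(outcomes[a]["robot"]))

print(best_action(OUTCOMES))  # -> open_locked_door
```

Because restraining the human collapses their options, it scores lowest, while opening the door scores highest; no explicit rule about doors or restraint is needed.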

Robots could adapt to new situations

Using this general principle rather than predefined rules of behaviour would allow the robot to take account of the context and evaluate scenarios no one has previously envisaged. For example, instead of always following the rule “don’t push humans”, a robot would generally avoid pushing them but still be able to push them out of the way of a falling object. The human might still be harmed but less so than if the robot didn’t push them.

In the film I, Robot, based on several Asimov stories, robots create an oppressive state that is supposed to minimise the overall harm to humans by keeping them confined and “protected”. But our principle would avoid such a scenario because it would mean a loss of human empowerment.

While empowerment provides a new way of thinking about safe robot behaviour, we still have much work to do on scaling up its efficiency so it can easily be deployed on any robot and translate to good and safe behaviour in all respects. This poses a very difficult challenge. But we firmly believe empowerment can lead us towards a practical solution to the ongoing and highly debated problem of how to rein in robots’ behaviour, and how to keep robots, in the most naive sense, “ethical”.

This article was originally published on The Conversation. Read the original article.

