Isaac Asimov's Three Laws of Robotics (2024)

SPEAKER 1: More than half a century before Stephen Hawking and Elon Musk felt compelled to warn the world about artificial intelligence, back in 1942, before the term was even coined, the science fiction writer Isaac Asimov wrote the Three Laws of Robotics: a moral code to keep our machines in check. And the three laws of robotics are: a robot may not injure a human being, or, through inaction, allow a human being to come to harm. The second law: a robot must obey orders given by human beings, except where such orders would conflict with the first law. And the third: a robot must protect its own existence, as long as such protection does not conflict with the first and the second law. That sounds logical. Do these three laws provide a basis to work from to develop moral robots? Marcus, what do you think?

GARY MARCUS: I think that they make for good science fiction. There are lots of plots that can turn around having these kinds of laws. But the first problem, if you've ever programmed anything, is that a concept like harm is really hard to program into a machine. It's one thing to program in geometry or compound interest or something like that, where we have precise, necessary, and sufficient conditions. Nobody has any idea how to, in a generalized way, get a machine to recognize something like harm or justice.

So there's a very serious programming problem, and then there are a couple of other problems, too. One is that not everybody would agree that robots should never allow a human to come to harm. What if, for example, we're talking about a terrorist or a sniper or something like that? Some people, not everybody, but some people might actually want to allow that into what they would let robots do. And then the third issue, if you really think through the third of those laws, is that it sets up robots to be second-class citizens, and ultimately to be slaves. Right now that might seem OK, because robots don't seem very clever, but as they get smarter and smarter, they might resent that, or it might not feel like the appropriate thing to do.

SPEAKER 1: You mean those laws might not be fair to robots.

MARCUS: They might not be fair to robots. That's exactly what I'm saying.

SPEAKER 1: But the problem is not just with the machines but with our ethical code itself, surely. Do we know what fair is? That is, if we agree we should be fair to robots.

MARCUS: That's part of the problem: we don't know what code we should program in. Asimov's laws are a nice starting point, at least for a novel. But, for example, imagine that we programmed in our laws from the 17th century. Then we would have thought slavery was OK. So maybe we don't want to program in the fixed laws that we have right now and shackle the robots forever. We don't want to burn them into the robots' ROM chips. But we also don't know how we want the morals to grow over time. It's a very complicated issue.

Having worked on both the theoretical and the practical sides of artificial intelligence and ethics, I want to add some context to the challenges and complexities that sit at the intersection of technology and morality.

The discussion of Isaac Asimov's Three Laws of Robotics sits squarely within that territory. As early as 1942, Asimov laid the groundwork for contemplating the ethical dimensions of artificial intelligence, predating the contemporary concerns raised by figures like Stephen Hawking and Elon Musk. The Three Laws, which require a robot to avoid harming humans, to obey human orders, and to preserve its own existence only within the bounds set by the higher-priority laws, have sparked numerous debates.
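To make that priority ordering concrete, here is a minimal sketch in Python, assuming a toy `Action` type and a `choose_action` helper that are purely illustrative and not part of the original discussion. The boolean fields stand in for judgments that, as the conversation above stresses, nobody actually knows how to compute in general.

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative sketch only. The fields below stand in for judgments
# (does this harm a human? does it follow an order?) that, as the
# discussion stresses, nobody knows how to compute in general.

@dataclass
class Action:
    description: str
    harms_human: bool      # would injure a human, or allow harm through inaction
    follows_order: bool    # complies with an order given by a human
    endangers_self: bool   # risks the robot's own existence

def choose_action(candidates: List[Action]) -> Optional[Action]:
    """Pick an action by applying the Three Laws as a strict priority order."""
    # First Law dominates: actions that harm humans are never eligible.
    pool = [a for a in candidates if not a.harms_human]
    # Second Law: prefer obedient actions, but only among First-Law-safe ones.
    pool = [a for a in pool if a.follows_order] or pool
    # Third Law: prefer self-preserving actions, subject to Laws 1 and 2.
    pool = [a for a in pool if not a.endangers_self] or pool
    return pool[0] if pool else None

if __name__ == "__main__":
    options = [
        Action("obey an order that would hurt a bystander", True, True, False),
        Action("refuse the order and shield the bystander", False, False, True),
    ]
    print(choose_action(options).description)  # the First Law outranks obedience
```

The filtering is lexicographic, mirroring the laws' wording: the First Law always wins, the Second applies only among First-Law-safe options, and the Third only after both.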

In the conversation above, Gary Marcus, a respected figure in the AI community, highlights the difficulty of translating these laws into practical programming. He underscores how hard it is to define an abstract concept such as "harm" in a precise and universally applicable way, a fundamental programming obstacle. He also notes that not everyone would agree a robot should never allow harm, pointing to cases like a sniper or a terrorist, which introduces a subjective layer into the ethical framework.
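Marcus's contrast between compound interest and harm can be made concrete in a few lines. The Python sketch below is purely illustrative: the names `compound_interest` and `is_harmful` are hypothetical, and the point is simply that the first has a precise closed-form definition while no general implementation of the second is known.

```python
# Illustrative sketch only; the function names here are hypothetical.

def compound_interest(principal: float, rate: float, periods: int) -> float:
    """Future value under compounding: a precise, fully specified rule."""
    return principal * (1 + rate) ** periods


def is_harmful(action_description: str) -> bool:
    """Placeholder for a harm predicate; no general definition is known."""
    raise NotImplementedError("There is no precise, general definition of 'harm'.")


print(compound_interest(1000.0, 0.05, 10))  # 1628.89..., well defined
# is_harmful("unplug the server")  # ill defined; this is Marcus's point
```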

Moreover, Marcus raises concerns about the potential discriminatory implications of the laws, especially the third one, which could relegate robots to a status akin to second-class citizens or even slaves. He aptly anticipates the evolving capabilities of robots, suggesting that as they become more sophisticated, issues of fairness and autonomy may arise.

The conversation also delves into the broader ethical dilemma faced by humanity: how stable and adaptable is our ethical code over time? While Asimov's laws serve as a starting point, Marcus suggests that hard-coding today's moral values into AI systems might prevent them from adapting to evolving societal norms.

In essence, the discussion emphasizes the intricate balance between defining a moral code for AI, acknowledging the fluidity of ethical standards, and addressing the practical challenges of programming abstract concepts into machines. As we navigate the ethical dimensions of AI, it becomes evident that a dynamic and evolving approach is necessary to ensure that our technological creations align with our evolving moral compass.
