Robot: I’m sorry. Human: I don’t care anymore!

Published On:
February 2, 2023

Humans are less forgiving of robots after multiple mistakes—and the trust is difficult to get back, according to a new University of Michigan study.

Like human co-workers, robots can make mistakes that violate a human's trust in them. When mistakes happen, people perceive the robot as less trustworthy, and their trust in it declines.

The study examines four strategies that might repair and mitigate the negative impacts of these trust violations: apologies, denials, explanations and promises of trustworthiness.

In an experiment, 240 participants worked with a robot co-worker to accomplish a task in which the robot sometimes made mistakes. After violating the participant's trust, the robot applied one of the repair strategies.

Results indicated that after three mistakes, none of the repair strategies fully restored trustworthiness.

“By the third violation, strategies used by the robot to fully repair the mistrust never materialized,” said Connor Esterwood, a researcher at the U-M School of Information and the study’s lead author.

Esterwood and co-author Lionel Robert, professor of information, noted that the research also introduces theories of forgiving, forgetting, informing and misinforming.

The study results have two implications. Esterwood said researchers must develop more effective repair strategies to help robots better repair trust after these mistakes. Also, robots need to be sure that they have mastered a novel task before attempting to repair a human’s trust in them.

“If not, they risk losing a human’s trust in them in a way that cannot be recovered,” Esterwood said.

What do the findings mean for human-human trust repair? Trust is never fully repaired by apologies, denials, explanations or promises, the researchers said.

“Our study’s results indicate that after three violations and repairs, trust cannot be fully restored, thus supporting the adage ‘three strikes and you’re out,'” Robert said. “In doing so, it presents a possible limit that may exist regarding when trust can be fully restored.”

Even when a robot can improve and adapt after making a mistake, it may not be given the opportunity to do better, Esterwood said. Thus, the benefits of robots are lost.

Robert noted that people may attempt to work around or bypass the robot, reducing their own performance. This could lead to performance problems, which in turn could lead to them being fired for lack of performance and/or compliance, he said.

The findings appear in Computers in Human Behavior.

The key concepts presented in the article are summarized below:

  1. Trust Dynamics in Human-Robot Interaction: The University of Michigan study explores the intricate dynamics of trust in the context of human-robot interaction. The central premise is that, similar to human interactions, trust between humans and robots can be eroded by mistakes made by the robots.

  2. Impact of Mistakes on Trust: The study emphasizes that when robots make mistakes, humans tend to perceive them as less trustworthy. This is a critical point as it highlights the fragility of trust in the context of technological interactions.

  3. Trust Repair Strategies: The research identifies four strategies employed by robots to repair and mitigate the negative impacts of trust violations. These strategies include apologies, denials, explanations, and promises. The study conducted an experiment involving 240 participants working with a robot co-worker to assess the effectiveness of these strategies.

  4. Limitations of Repair Strategies: The key finding is that, after three mistakes, none of the repair strategies fully restored trustworthiness. This suggests a critical threshold beyond which trust in robots may be irreversibly damaged.

  5. Theoretical Framework: The study introduces theories of forgiving, forgetting, informing, and misinforming in the context of human-robot interaction. These theoretical constructs contribute to a deeper understanding of the complexities involved in rebuilding trust.

  6. Implications for Robot Design and Deployment: The results of the study have practical implications. Researchers and developers need to focus on devising more effective repair strategies to help robots regain trust after making mistakes. Additionally, there is an emphasis on ensuring that robots are proficient in a task before attempting to repair trust, as repeated mistakes may lead to permanent loss of trust.

  7. Parallel with Human-Human Trust Repair: The article draws parallels between human-robot trust repair and human-human trust repair. The conclusion suggests that, similar to robots, trust in human relationships may also have limits to full restoration after repeated violations.

  8. Risk of Performance Issues and Job Loss: The article discusses the potential consequences of eroded trust in robots. If humans perceive robots as untrustworthy, they may attempt to bypass or work around them, leading to performance issues. This, in turn, could result in the human workers being fired for lack of performance or compliance.

  9. Publication Venue: The findings are published in "Computers in Human Behavior," indicating that the research contributes to the broader discourse on the intersection of human behavior and technology.

In summary, the article sheds light on the intricacies of trust in human-robot interactions, emphasizing the need for effective repair strategies and caution in deploying robots in tasks where their competence is not assured. The findings have implications not only for the field of robotics but also for understanding trust dynamics in human relationships.
