Sad eyes and metallic sounds can evoke a strange form of empathy, even though robots cannot actually feel pain. This phenomenon was explored by Marieke Wieringa of Radboud University in the Netherlands, who conducted a study for her doctoral thesis. She highlights how our predisposition to feel compassion even for inanimate objects can be exploited and manipulated by companies. Through various experiments, Wieringa and her team analyzed people's reactions to acts of violence against robots. In some situations, the robots did not respond at all, while in others they made sounds or gestures that simulated pain.
Study participants were more likely to feel guilty when interacting with robots that appeared to express emotions than when interacting with robots that did not. This suggests that a robot's ability to evoke compassion can significantly influence human behavior. Wieringa argues for the importance of establishing guidelines for the use of simulated emotions by robots and chatbots. However, she also recognizes the potential benefits of emotional robots, for example in the context of rehabilitation therapy for people who have suffered trauma.
This highlights an interesting aspect of our behavior: although we consider ourselves rational and logical, emotions play a crucial role in our daily interactions. Wieringa's research invites us to reflect on how emotions shape the way we perceive and relate to technology. The human capacity for empathy is not limited to living beings but also extends to artificial entities, creating complex situations that the future development of robotics and artificial intelligence will have to manage. It is essential to consider how these interactions may evolve and what rules we may need to implement to ensure the ethical use of technologies that mimic human emotions.