Good Robot Design or Machiavellian? An in-the-wild robot leveraging minimal knowledge of passersby’s culture
Social robots are being designed to use human-like communication techniques, including body language, social signals, and empathy, to work effectively with people. Just as between people, some robots learn about people and adapt to them. In this paper we present one such robot design: we developed Sam, a robot that learns minimal information about a person’s background and adapts to this background. Our in-the-wild study found that people helped Sam for significantly longer when it adapted to match their background. While initially we saw this as a success, in reconsidering our study we began to see a different angle. Our robot effectively deceived people (changed its story and text), based on some knowledge of their background, to get more work from them. There was little direct benefit to the person from this adaptation, yet the robot stood to gain free labor. We would like to pose the question to the community: is this simply good robot design, or is our robot being manipulative? Where does the ethical line lie between a robot leveraging social techniques to improve interaction, and the more negative framing of a robot or algorithm taking advantage of people? How can we decide what is good here, and what is less desirable?
Elaheh Sanoubari, Stela H. Seo, Diljot S. Garcha, James E. Young, Verónica Loureiro-Rodríguez. “Good Robot Design or Machiavellian? An in-the-wild robot leveraging minimal knowledge of passersby’s culture”. In Proceedings of alt.HRI track, ACM/IEEE International Conference on Human-Robot Interaction. 2019.