Looks matter for robots. Mechanical robotic arms work in factories, not at the shopping mall's receptionist counter. Then there are social robots that look like humans, such as Jia Jia, the Chinese-developed female robot, or Mindar, a robot priest serving in a Buddhist temple in Japan. It may seem intuitive that human-like robots would be better received in an office setting. When managerial tasks such as giving negative feedback to employees are delegated to a robot, wouldn't it be great if this robotic supervisor looked like a human and showed more empathy?

Our research suggests that a robot's human-like appearance puts it in a bad spot instead. Published in the Journal of Experimental Social Psychology, our findings showed that when people received negative feedback from a robotic supervisor, they were more likely to behave spitefully towards supervisors with more human-like features.

This is because people perceive these human-like robots as having bad intentions. We do not naturally attribute minds to non-human entities, but a robot's human-like appearance often leads us to do so. Employees are therefore more likely to perceive the negative feedback as abusive and to retaliate spitefully.

In contrast, participants were not offended when they received negative feedback from non-human-like robots. It is like when someone steps on your toes: you only feel offended if you think they did it on purpose.

Our laboratory studies were carried out with two different groups of university students in Asia. In the first study, we randomly assigned participants to two groups. One group reported to a robot supervisor that had a blank screen for a face and spoke with a mechanical voice; the other reported to a robot supervisor named Paul, which had an animated screen for a face and spoke with an accent.

Each participant was paired with a confederate (a person who poses as a participant but whose behaviour is rehearsed prior to the experiment), and the team answered quizzes on football trivia. The robot supervisor would give negative feedback on low scores, saying "You did below average". Afterwards, participants responded to a survey on their perceptions of abuse, with items such as "the robot was rude to me", and on their intention to retaliate, for example by choosing how much electricity to divert away from the robot, ranging from leaving it fully charged to shutting it down entirely.

The results showed that participants in the human-like robot condition perceived greater ill intent from the robot supervisor and felt more strongly that they had been abused, even though the feedback was identical across the two conditions. These perceptions in turn led to increased retaliation against the supervisor.

In the second study, we examined the same phenomenon, except that this time the negative feedback was directed only at the confederate and the participants were merely third-party observers. Still, they took greater retaliatory action against the more human-like robot supervisor.

It is important that we study how robots are used in the modern workplace. Although the notion of robot supervisors might seem far-fetched to some, the reality is that robots are already beginning to take on an array of managerial tasks, from job training and performance appraisal to hiring and firing. Our research cautions against the largely unquestioned recommendation to make robots human-like in appearance.

This article is an abridged version of one first published in SCMP.