The Pepper robot in Italy in 2017. Shutterstock/MikeDotta

Medical robots: their facial expressions will help humans trust them

Robots, AI and autonomous systems are increasingly being used in hospitals around the world. They help with a range of tasks, from assisting in surgical procedures and taking patients’ vital signs to providing security.

Such “medical robots” have been shown to increase precision in surgery and, through their automated systems, even reduce human error in drug delivery. Their deployment in care homes has also shown they can help reduce loneliness.

Many people will be familiar with the smiling face of the Japanese Pepper robots (billed in 2014 as the world’s first robot that reads emotions). Indeed, “emotional” robot companions are now widely available. But despite the apparent technical and emotional advantages, research shows that a clear majority of people refuse to trust robots and machines with important, potentially life-saving roles.

To be clear, I’m not saying robots should replace human doctors and nurses. After all, people who are scared and ill don’t forget the experience of someone holding their hand, explaining complicated issues, empathising and listening to their worries. But I do think robots will play a vital role in the future of healthcare and dealing with possible future pandemics.

So I am on a mission to understand why some people are reluctant to trust medical robots. My research investigates the applications of robot intelligence. I am particularly interested in how different robotic facial expressions and design elements, like screens on the face and chest, may contribute to the construction of a medical robot that people will more readily trust.

Past research has shown that facial cues can influence how much we trust a person. So to begin with, I surveyed 74 people from across the world, asking whether they would trust a robot doctor in everyday life. Only 31% of participants said yes. People were also reluctant to see robots take on other high-risk jobs, such as police officer or pilot.

‘Facial’ expressions

To establish how to build a robot that exudes trustworthiness, I began looking into a range of facial expressions, designs and modifications for the Canbot-U03 robot. This robot was selected for its non-intimidating appearance: it stands only 40cm tall. It forms part of the Canbot family and is advertised as a “sweet companion and caring partner” offering “24 hours of unconditional companionship and house managing”.

Once I’d found my robot, I drew on psychological research suggesting that facial expressions help determine perceived trustworthiness. Smiling signals a trustworthy nature, for example, while angry expressions are associated with dishonesty.

With this in mind, I began looking at the robot’s facial expressions and how manipulating these features might improve human-robot interaction.

Canbot robots with different facial expressions and designs. Author provided

As expected, robots with “happy/smiling” faces were generally accepted and trusted more. Meanwhile, robots with distorted, angry and unfamiliar faces made participants feel “uncertain and uncomfortable”, and were judged intrinsically untrustworthy.

The uncanny valley

I also designed a robot with human eyes – the version with the most human characteristics. Surprisingly, this too was largely rejected, with 86% of participants saying they disliked its appearance.

Participants said they wanted a robot that resembled humans with a face, a mouth and eyes but – crucially – not an identical representation of human features. In other words, they still wanted them to look like a robot, not some unsettling cyborg hybrid.

These findings align with a phenomenon called the “uncanny valley”: we accept robots with a human likeness, but only up to a certain point. Once a robot crosses this point and looks too human, our acceptance of it can swiftly flip from positive to negative.

The Canbot’s chest screen provides an additional platform for conveying information and building trust. In a hospital, it might be used to communicate data to patients and staff. For me, the interest lies in how the facial and chest screens can work together to communicate the trustworthiness of that information.

To evaluate the influence of both facial and chest screens, we introduced a range of distinctive modifications. For example, there were hand-drawn faces, happy cartoon faces and cyborg faces, as well as cracked and blurry screens, or screens with error messages on them.

Canbot face modifications displaying answers in the equation task.

We asked participants, under strict time constraints, to decide which robot was displaying the correct answer to a complex mathematical problem, based solely on the robot’s appearance. The equation was too complicated to solve in the time available, so participants had to trust the robot’s visual appearance to decide which answer seemed honest – and therefore correct. The vast majority of participants were repeatedly drawn to trusting only the robots with happy or neutral faces.

So the combination of facial expression and what is displayed on the screen is important. For serious medical messages, an impassive “face” would be needed to match the gravity of the statement. But general communication with patients may require a more empathetic or happy appearance.

I believe that building more human characteristics into robot design will help build trust. But we also have to be aware of the limits.
