Your face plays an important role in the experience and expression of emotion. Yet despite the complexity of the human face, which has 43 muscles in all, most existing facial expression research focuses on six “basic” emotions: happiness, surprise, sadness, anger, fear and disgust.
In a study published today in the Proceedings of the National Academy of Sciences, researchers from Ohio State University sought to expand the focus to a much more varied set of faces.
Their goal was to develop rules for these complex states and build a computer algorithm to identify them, with an eye, presumably, towards developing machine learning capabilities and making smart devices smarter.
Amazingly, the algorithm was able to correctly guess those complex facial expressions more than three-quarters of the time.
We feel happy when joking with friends, sad when we lose a loved one and disgust when we come across something rotten in the fridge.
Our emotional experience can also be much more complex. We might feel happy for a close work colleague when she gets promoted, but also disappointed that we were passed over. For scientific investigation of emotion, it is important to account for such complexity.
The researchers of today’s study proposed that more complex emotions (what they called “compound emotions”) are combinations of basic emotions. For example:
- hatred is a combination of anger and disgust
- awe is a combination of fear and surprise.
The authors reasoned that perhaps the facial expressions of these complex states are combinations of the displays representing “basic” emotions. In other words, if you added elements of an angry face to those of a disgusted face, you’d end up with a face that conveys hatred.
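This “combination” idea can be sketched in code as set unions over facial action units (AUs), the muscle movements catalogued by FACS. The AU sets below are illustrative stand-ins, not the paper’s exact prototypes, though the AU numbers themselves follow standard FACS labels (e.g. AU9 is the nose wrinkler):

```python
# Sketch: compound expressions modelled as unions of basic-emotion action units (AUs).
# The AU sets here are illustrative examples, not the study's exact prototypes.

BASIC_AUS = {
    "anger":    {4, 7, 24},       # brow lowerer, lid tightener, lip presser
    "disgust":  {9, 10, 17},      # nose wrinkler, upper-lip raiser, chin raiser
    "fear":     {1, 4, 20, 25},   # inner brow raiser, brow lowerer, lip stretcher, lips part
    "surprise": {1, 2, 25, 26},   # inner/outer brow raisers, lips part, jaw drop
}

def compound_aus(*emotions):
    """Union the AU sets of the component basic emotions."""
    aus = set()
    for emotion in emotions:
        aus |= BASIC_AUS[emotion]
    return aus

hatred = compound_aus("anger", "disgust")   # anger AUs plus disgust AUs
awe = compound_aus("fear", "surprise")      # fear AUs plus surprise AUs
```

On this simple model, a “hatred” face would show every muscle movement of the anger face plus every movement of the disgust face; as the study found, the reality is messier.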
As a first step, the researchers collected photos of people posing faces associated with emotions. Some 230 participants were given an emotion word (such as “disgust”), an example of a situation that might elicit that emotion (such as “a foul odour”) and a photo of someone else making the expression. Photos of each of 22 posed faces (six basic, 15 compound and one neutral) were snapped.
The resulting 5,060 photos were evaluated using a popular coding system in emotion research called the Facial Action Coding System, or FACS. The coding scheme notes which facial muscles are moved in an expression, such as nose wrinkling in the “disgust” pose.
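A FACS-coded photo can be thought of as a simple record of which action units are active. A minimal sketch, assuming a binary vector indexed by AU number (the vector layout is our choice; FACS itself also records intensity, which is omitted here):

```python
# Sketch: a FACS-coded photo as a binary vector over action units (AUs).
# AU numbering follows FACS (e.g. AU9 = nose wrinkler); intensities are ignored.

N_AUS = 46  # roughly the number of AUs FACS distinguishes

def encode(active_aus):
    """Return a 0/1 vector marking which AUs a coder observed in the photo."""
    vec = [0] * (N_AUS + 1)  # index directly by AU number; slot 0 is unused
    for au in active_aus:
        vec[au] = 1
    return vec

# A "disgust" pose: nose wrinkled (AU9), upper lip raised (AU10), chin raised (AU17)
disgust_photo = encode({9, 10, 17})
```

Coding 5,060 photos this way turns each face into a comparable feature vector, which is what makes both the statistical comparison of expressions and the later machine classification possible.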
The compound emotion expressions did indeed appear to be combinations of the muscle changes associated with their componential counterparts – see the “happily disgusted” expression below. (If you’re curious as to what “happily disgusted” means, researcher Aleix Martinez described it as: “how you feel when you watch one of those funny ‘gross-out’ movies and something happens that’s really disgusting, but you just have to laugh because it’s so incredibly funny.”)
One of the interesting observations from evaluating the facial expressions is that some of the compound emotion poses required conflicting muscle movements. Clearly, it would be impossible to simultaneously have one’s lips parted for happiness and lips pressed together for disgust in “happily disgusted”.
A number of unique muscle movements were also observed in some compound states: lips were parted in 43% of actors displaying “sadly disgusted”, even though parted lips are not a component of either the sad or the disgusted basic expression.
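The conflict problem can be made concrete by listing pairs of mutually exclusive action units and checking a naive union against them. A toy sketch, using one genuinely incompatible pair (AU24, lip presser, versus AU25, lips part); the other AU sets are illustrative:

```python
# Sketch: checking a naively combined AU set for mutually exclusive movements.
# AU24 (lip presser) and AU25 (lips part) cannot physically co-occur.

CONFLICTS = [(24, 25)]  # illustrative list; a fuller model would enumerate more pairs

def conflicting_pairs(aus):
    """Return the incompatible AU pairs present in a combined AU set."""
    return [(a, b) for a, b in CONFLICTS if a in aus and b in aus]

happiness_like = {6, 12, 25}   # cheek raiser, lip-corner puller, lips part
disgust_like = {9, 10, 24}     # nose wrinkler, upper-lip raiser, lip presser

naive_union = happiness_like | disgust_like
clashes = conflicting_pairs(naive_union)  # contains (24, 25): the union is impossible
```

A posed “happily disgusted” face therefore cannot be a pure union of its components; the actor has to resolve each clash one way or the other, which is exactly the kind of deviation the coders observed.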
After the photos were analysed, it was time to test whether a computer could accurately identify the compound facial expressions.
The research team wrote and trained a computer algorithm using similar emotion expression data from other research groups, then set the algorithm loose on their 5,060 photos of basic and compound posed emotions.
How did the computer do? The algorithm was highly successful at identifying the basic emotion expressions (96.86% correct) and slightly less successful at identifying the compound emotion expressions (76.91% correct).
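The paper’s actual classifier is far more sophisticated, but the basic recipe (match a new face’s coded features against labelled expression prototypes) can be sketched with a toy nearest-neighbour classifier over AU sets. Everything here, including the prototype AU sets, is an illustrative stand-in for the real algorithm:

```python
# Toy sketch of expression classification: nearest neighbour over AU sets,
# scored by Jaccard similarity. A stand-in for the study's real classifier.

def jaccard(a, b):
    """Similarity between two AU sets: shared AUs over all AUs involved."""
    return len(a & b) / len(a | b)

def classify(photo_aus, prototypes):
    """Return the label whose AU prototype best matches the photo's AUs."""
    return max(prototypes, key=lambda label: jaccard(photo_aus, prototypes[label]))

PROTOTYPES = {                          # illustrative AU sets, not the study's
    "happiness": {6, 12, 25},
    "disgust": {9, 10, 17},
    "happily disgusted": {6, 12, 9, 10},
}

label = classify({6, 12, 9, 10, 25}, PROTOTYPES)
```

Accuracy is then just the fraction of photos whose predicted label matches the posed one; compound expressions score lower because their AU patterns overlap heavily with their basic components.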
Is there an app for that?
So what does this mean for the promise of computers detecting our emotions? Does the ability to classify compound emotion expressions mean smart devices will become smarter? The answer is a confident “yes – and no”.
Recognising that human emotional life is far more complex than a small set of emotions is an important first step in improving technology at the human-machine interface.
Demonstrating that computers can be trained to identify more complex emotion expressions speaks to the potential ability of computers to, at some point in the future, recognise what we are communicating with our faces – but a number of critical questions remain.
First, in this study, as in many others, researchers used posed faces. This means that they asked people to move their faces in certain ways by prompting them with emotion words, hypothetical scenarios and visual examples.
Yes, we might be able to get computers to classify those faces. But if people don’t generate them outside the lab, this technology will have limited utility.
Second, the concept of compound emotions remains largely theoretical: do people actually experience these states in the first place? Other research has uncovered the nuanced nature of “mixed” emotional experience, and there is debate about whether mixed emotions even exist.
Regardless, the next time you find yourself in a happy yet disgusting situation, try to catch a glimpse of your face – it could be what your “happily disgusted” expression looks like.