Dogs really are our best friends. A study published today in Current Biology shows that not only do dogs and humans read emotions in each other’s “voices”, but both are more attuned to “happy” sounds.
And the non-invasive study, which used low-stress neuro-imaging techniques to compare the brains of humans and another species, is to be applauded.
Attila Andics and colleagues from Budapest, Hungary, trained dogs to lie motionless wearing headphones in a noisy brain scanner, allowing scientists to map the voice-sensitive areas of their brains.
Two of the study’s authors – Ádám Miklósi and Márta Gácsi – are well known for their research into dog and wolf behaviour, as well as the human-dog relationship as researchers for the Family Dog Project.
To conduct a successful brain scan of an awake (not anaesthetised) animal using functional magnetic resonance imaging (fMRI), the animal must remain motionless throughout the scan. The information gleaned can help us map higher cognitive functions of the brain, as different areas of the brain “light up” when the animal is exposed to various stimuli (such as sounds).
Unfortunately, most animals will not lie motionless in an fMRI machine without being restrained. As a result, decades of research on monkeys has been conducted in laboratories where monkeys have had surgically implanted head posts, ear bars and skull pins to hold their heads still while they were restrained in chairs.
More recently, in an attempt to find less invasive neuro-imaging techniques, special helmets have been designed and monkeys trained to sit still in a chair.
… and stay! Good dog!
Dogs will do almost anything to please their owners, so perhaps it is not surprising that owners or handlers can readily train their dogs to lie motionless with heads resting in a head coil in an fMRI machine.
(Another recent study – in the video above – examined two dogs using a slightly different technique, in which the dogs sat sphinx-like rather than lying forwards with their chins on the bed.)
As well as being able to see their owners in front of them while in the machine, the dogs were also rewarded with food and praise.
Not only did each of the 11 dogs in this study have to lie motionless on the scanner bed for six minutes at a time (three separate scans each), but each dog also wore earphones so that various sounds could be played to it.
To compare human and dog brain activity in the voice-sensitive regions, the 11 dogs and 22 humans listened through the earphones to the same set of almost 200 sounds, which included vocalisations of other dogs and humans, non-vocal environmental sounds and a “silent” (no sound) baseline.
The human and dog sounds varied in “emotional valence”, with some sounds representing negative emotions (stressed/“unhappy”, such as whining or crying) and others positive emotions (“happy”/relaxed, such as playful barking or laughing) – as rated by humans.
Interspecies emotional tuning
The most striking similarities were in the common responses to emotion in the dogs and humans. The fMRI images showed that the sounds linked to negative or positive emotions were processed in a similar way in the auditory cortex – the part of the brain primarily responsible for processing sound – of both species.
The researchers point out that both species are sensitive to emotion expressed through voice in other humans and dogs, and the voice areas of mammalian brains may have a longer evolutionary history than previously thought, dating back some 100 million years.
When humans and dogs listened to “happy” sounds, a localised area in the right caudal ectosylvian gyrus (cESG) near the primary auditory cortex lit up. This was not the case when participants heard “unhappy” sounds.
Not all the findings showed such striking similarities. Some of the auditory or sound-sensitive areas were localised in different regions of the brain.
In dogs, 48% of their sound-sensitive regions lit up when they heard environmental (non-vocal) sounds compared to just 3% for humans with the same sounds.
For humans, 87% of their sound-sensitive regions lit up when they heard other human vocalisations, and 10% for dog vocalisations.
For dogs, this was 39% when they heard other dog vocalisations and 13% for human vocalisations. In other words, both dogs and humans have voice areas of the brain that prefer (respond to) conspecific vocalisations (“voices” of members of their own species).
Humans and dogs are both social species with a long shared history, so while it’s no surprise that dogs and humans read each other’s emotions in “voices”, it’s interesting that we’re more sensitive to “happy” sounds.
It also appears that humans are particularly attentive to sounds of other humans, whereas dogs are perhaps more attentive to other sounds in the environment.
The authors conclude that being able to process voices of other species may allow cross-species call recognition (very useful for predator avoidance and hunting).