With many consumer devices, we can find out about their status without having to look at them directly. When a kettle clicks off, you know the water has boiled. When the toaster pops, you know the toast is ready. The gentle hum of a freezer tells you it is functioning normally.
The more you use a particular device, the more you get used to its sounds. If you hear something different, like a car engine making an odd sound, it may be time to phone a mechanic.
Sound allows us to receive messages while doing other things at the same time. We also interpret auditory information about 40 milliseconds faster than visual cues. This is why, for example, athletic competitions still use starters’ pistols.
This kind of communication has carried over badly to the digital era, however. Computers and smartphones may make sounds to tell us things, but we tend to silence them or reduce them to a minimum.
Digital designers have tended to create sounds that are desirable only in isolation, without considering context. Even when we do leave the sound alerts on, for example, we often don’t know which device is the source because they sound so similar.
Devices don’t recognise that our listening evolves with repetition: once we know how a device works, we need to hear less from it. They take no account of listeners with different hearing abilities, or of our need for something more engaging when we are distracted. Even extensive customisation options are usually of little help.
We put up with all this, of course, partly because sound alerts are less necessary when we are looking at screens anyway. Now, however, we are entering an era when we may want to reconsider this relationship. Welcome to the internet of things, where more and more household devices are becoming computerised, from televisions to fridges, to burglar alarms, to household lighting.
Household devices will increasingly communicate with one another and even evolve according to our requirements, offering dramatic increases in what they can do. The vision seems to be that we will control these devices through an intermediary: manufacturers have investigated robots, smart displays and voice assistants to varying degrees of success.
Instead, it would be better to draw on the likes of clicking kettles and popping toasters and program the devices of the future to use sounds to communicate with us directly – and to listen at the same time. Otherwise, we will be passing up a whole stream of information that could make the internet of things far more effective.
Alerts from these devices wouldn’t have to be very noisy. They could be set to sound only when absolutely necessary. They could also be programmed to make fewer sounds over time, acknowledging that the user has become familiar with the device and is operating it correctly.
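To make that idea concrete, such a habituation policy could be as simple as a counter per alert type. The Python sketch below is a hypothetical illustration; the severity labels and threshold are assumptions, not any particular product’s design.

```python
def should_sound(event, times_heard, threshold=5):
    """Toy habituation policy: critical events always sound, while
    routine ones fall silent once the user has heard them enough
    times to know what they mean. (Labels and the threshold of 5
    are illustrative assumptions.)"""
    if event == "critical":
        return True
    return times_heard < threshold
```

A kettle’s “finished” chirp, say, would keep sounding for a new owner but fade from use once it had been heard and understood a handful of times.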
Neither should past digital sound failures make us defeatist. There are endless physical sounds, and some success stories to draw on. The most famous translation from real to virtual is perhaps the sound of paper being crumpled when you click to empty the trash on many computers. People tend to leave this auditory icon on, perhaps because it is easy to comprehend and remember.
While direct correlations like this don’t always work, the underlying language of sound design is similar in both worlds. Loud sounds are perceived as more important than quiet ones. High-pitched sounds are easier to locate; short, irregularly timed sounds capture attention more easily; and the size of an object is conveyed by its ratio of high to low frequencies, with more low frequencies suggesting a bigger object. If designers follow rules like these more closely, they can produce genuinely useful alerts.
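Rules like these can be sketched in code. The toy generator below, in Python, builds an “urgent” alert as short, loud, high-pitched bursts with irregular gaps – the combination the rules above associate with capturing attention. All names and parameter values are illustrative assumptions, not a standard.

```python
import math
import random

SAMPLE_RATE = 44_100  # samples per second

def tone(freq_hz, duration_s, amplitude):
    """A sine-wave burst as a list of floats in [-1, 1]."""
    n = int(SAMPLE_RATE * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

def silence(duration_s):
    return [0.0] * int(SAMPLE_RATE * duration_s)

def urgent_alert(freq_hz=2000, bursts=3, seed=0):
    """Short, loud, high-pitched bursts with irregular timing:
    the cues described above for an important, attention-grabbing
    alert. (Frequency, burst length and gap range are assumed.)"""
    rng = random.Random(seed)
    samples = []
    for _ in range(bursts):
        samples += tone(freq_hz, 0.08, 0.9)          # loud = important
        samples += silence(rng.uniform(0.05, 0.2))   # irregular gaps
    return samples
```

A low-priority alert would invert these choices: quieter, lower-pitched, with a single longer, evenly paced tone.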
Sound alerts are only a small part of the picture. The bigger prize is enabling devices to “hear” – both other devices and other sounds in a house.
This could be done relatively easily using similar technology to audio watermarking, where very subtle audio is embedded into a music track and a piece of software enables a computer to count plays for copyright purposes. In the case of household devices, all they would require is a microphone, loudspeaker and relevant software.
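The principle behind audio watermarking is straightforward: mix a faint, key-generated pseudorandom signal into the audio, then detect it later by correlating against the same sequence. The Python sketch below is a bare-bones illustration of that idea under assumed parameters (the key, the embedding strength and the signal model are all made up here), not a production watermarking scheme.

```python
import random

def mark(key, n):
    """Key-generated pseudorandom +/-1 sequence, reproducible by
    anyone who knows the key."""
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed(audio, key, strength=0.02):
    """Mix the watermark in well below audible level."""
    return [s + strength * m for s, m in zip(audio, mark(key, len(audio)))]

def detect(audio, key):
    """Correlate against the same sequence: the score rises by
    `strength` when the watermark is present, and stays near zero
    for unrelated audio."""
    m = mark(key, len(audio))
    return sum(s * w for s, w in zip(audio, m)) / len(audio)
```

A play-counting service would run a detector like this over incoming audio and log a hit whenever the score cleared a threshold; a household device could use the same trick to announce its identity to anything within earshot.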
Fridges or lighting displays could then analyse their auditory environment when they were switched on, for example. They might alter their own sounds to complement the sounds coming from other devices, without the user even being aware of the change.
A smoke alarm that senses fire might work out which device is on fire from the sounds it is making, then switch it off. Taps could turn themselves off if they heard water splashing on the floor. Doors might lock if they heard snoring inside a room. Children crying could trigger a soothing night light or music, or open a microphone link to their mum or dad’s voice.
Devices could introduce sounds subtly and turn them up if they went unnoticed. Or if a sound was on too similar a frequency to the burglar alarm, say, the device could automatically change it to something else.
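A toy version of that adaptive loop might look like the Python sketch below. The reserved “alarm band”, the guard width and the step sizes are all illustrative assumptions.

```python
def adapt_alert(volume, noticed, freq_hz, reserved_hz=(3000,), guard_hz=200):
    """Quieten alerts the user already responds to; escalate ones
    that go unnoticed; and shift frequency away from reserved bands
    such as a burglar alarm's. (All values are illustrative.)"""
    if noticed:
        volume = max(0.1, round(volume * 0.8, 3))  # familiar: fade back
    else:
        volume = min(1.0, round(volume + 0.1, 3))  # unnoticed: turn up
    while any(abs(freq_hz - r) < guard_hz for r in reserved_hz):
        freq_hz += guard_hz * 2  # too close to the alarm: move away
    return volume, freq_hz
```

Run each time an alert fires, a loop like this nudges the device toward the quietest sound that still gets a response, while keeping clear of frequencies another device has claimed.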
As all devices would be online, sound designers could monitor usage to maximise effectiveness and influence future designs. They may give devices a broad palette of sounds and update them automatically. It would happen seamlessly, without users needing to consult small screens to get the feedback they want.
The internet of things looks likely to revolutionise our relationship with our household devices, but it will work much more efficiently if devices make sounds and “hear” in this way. Designers need to make this a priority and learn from mistakes with digital devices thus far. Let’s not undermine this great opportunity by keeping things quiet.