From spreading fake news to fostering narcissism and online bullying, social media is under increasing fire. The question of how to harness its potential while limiting negative effects is one of the biggest of our age. And its effects on children’s physical and mental health are perhaps among the greatest of these challenges.
Yet among all this debate about how social media has changed our lives, children’s exposure to advertising on social media is rarely discussed. This is ironic, as advertising pays for social media. It drives the design of new platforms, which relentlessly seek to capture users’ attention.
Advertising to children is widely regarded as ethically problematic. Young children cannot distinguish between advertising and editorial or entertainment content; and older children, even if they rationally understand the selling intent behind advertising, are often still subject to its emotional and unconscious influence.
Junk food advertising, which is linked to increased child weight and obesity, sharpens this ethical issue, compounding it with health concerns. From three years of age, children recognise more unhealthy than healthy food brand logos. Children hold many rights under the UN Convention on the Rights of the Child, including the right to health, which governments have a duty to protect in the best interests of the child. Yet very few states have regulated food advertising effectively to fulfil their legal obligations under the convention.
Regulators do seek to protect children from harmful effects of TV advertising. But they typically focus on advertising “targeted at”, “directed at”, or “designed to attract the attention of”, children. These phrases have proven far too narrow. Most adverts that children see in broadcast media or the physical environment do not specifically “target” them; they are shown during family TV programmes such as prime-time sitcoms and reality shows, on billboards and bus shelters, or around sports fields where children and families watch their teams play.
In 2010 Ofcom (the UK’s communications regulator) published a review of the effectiveness of its 2007 rules banning junk food advertising in and around UK children’s TV programmes. It concluded that broadcasters had largely complied, but advertisers shifted to unregulated programmes, and as adult airtime accounted for nearly 70% of children’s viewing, children were still exposed to high levels of junk food advertising.
Therefore, as the WHO has repeatedly stressed since 2010, children’s overall exposure to junk food marketing needs to be reduced, wherever it is encountered. And now, as the broadcast era gives way to online media dominance, ethical and health concerns about junk food advertising to children are magnified.
Social media platforms hold vast data banks on all their users, offering advertisers detailed menus of options for targeting ads. They do so not only with basic demographics such as age and location, but also with psychological characteristics and preferences, increasing all consumers’ susceptibility to advertising.
Platforms also use children’s data to hone ad targeting. They identify children who are most interested in or vulnerable to junk food and its advertising, thereby sharpening children’s vulnerability and posing profound ethical questions about the business of advertising persuasion in the 21st century.
Yet the very means of targeting children with ads in social media now provides regulators with an opportunity. Governments could protect children much more effectively – if they were brave, and if the food industry, advertisers and social media platforms complied.
We are particularly intrigued by a code recently adopted by Ireland’s Department of Health, even though it is voluntary rather than mandatory. The code contains a potentially disruptive provision stating that “marketing Communications for HFSS [high fat, salt and sugar] food by means of social media shall not target children under the age of 15”.
This is a significant advance on recent UK online junk food marketing restrictions, because it applies to all social media, rather than only to sites targeting children. Why might it prove so powerful? Because the very concept of advertising “targeted at” children, which failed to regulate marketing effectively in other media, could now attack the precise mechanism by which ads are pushed out to children on social media.
Will it be effective?
It remains to be seen how this provision will apply. Ireland’s code of practice is voluntary rather than mandatory, and without effective enforcement it could create a false sense of security, as earlier regulations have done. There is also uncertainty about the accuracy of age information on social media: we know that many children lie about their age in order to sign up to certain platforms.
On the other hand, this provision might eventually prove even more radical than it first appears. Consider other means of spreading advertising around social media, such as sharing posts or tagging friends, which advertisers frequently urge users to do. Theoretically this new provision in Ireland’s code could – and we argue that it should – mean that tagging and sharing junk food ads with under-15s is also barred.
If that were the case it could have a powerful consciousness-raising effect: every time you tried to tag a young person, you would get a reminder of the role that junk food advertising plays in childhood obesity. Now that really would be progress for children’s health and rights.