
Online gambling: children among easy prey for advertisers who face few sanctions

With concerns growing that children and vulnerable people are being targeted by rogue online gambling advertising, my new research suggests the current sanctions aren’t enough to change the practices of online advertisers.

In April 2019, the UK’s Advertising Standards Authority (ASA) ran an experiment using an advertising avatar: an online identity that mimicked the internet use of a child. It found five gambling brands were specifically targeting their gambling offers at under-18s. A 2017 survey by the Gambling Commission found that 12% of children aged 11 to 16 had gambled with their own money in the previous week, and that 0.9% of children were problem gamblers.

In the wake of its experiment, the ASA announced a change to its guidelines stipulating that online gambling advertising must not be targeted at minors and must not appear on sections of websites of high interest to children. But it’s uncertain whether this will solve the problem. To date there is little evidence that the algorithms used by advertising exchanges prevent the exposure of gambling adverts to children.

Given the financial incentives involved for advertisers, and the lack of tough sanctions if they break the existing rules, this is unlikely to change. Under the current regulatory system, advertising exchanges are not subject to sanctions other than negative publicity, as the ASA cannot impose fines.

Targeting the vulnerable

New research my colleagues and I have carried out has identified two fundamental problems with the regulation of online gambling advertising.

First, we found that the automation of advertising placements through ad exchanges leads to adverts being targeted at children and vulnerable people. Through these exchanges, run by tech giants such as Google and Facebook, online advertising is targeted at viewers based on an online profile linked to their previous consumption and browsing patterns.

The fundamental difference from offline advertising is that this data-matching process is driven by artificial intelligence and machine learning. The system is built so that the more likely a particular user is to click on an ad, the more it costs a company to advertise to them, and so the more money the company hosting the advert will make. This placement process follows statistical criteria based on probability and hard economics, with little regard for ethical or legal standards.
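To make the incentive concrete, here is a minimal sketch of the kind of expected-revenue ranking an ad exchange might use. The names, bid values and click probabilities are hypothetical assumptions for illustration, not any real exchange’s code; the point is simply that the advert shown is whichever maximises predicted click probability multiplied by bid, with no check on who the viewer is.

```python
# Hypothetical sketch of expected-revenue ad ranking (not any real exchange's code).
# Each candidate advert is scored by predicted click probability x bid:
# the advert with the highest expected revenue wins, regardless of who the viewer is.

from dataclasses import dataclass

@dataclass
class Ad:
    advertiser: str
    bid: float                # what the advertiser pays per click, in pounds (assumed)
    click_probability: float  # model's prediction that this user will click (assumed)

def expected_revenue(ad: Ad) -> float:
    """Revenue the exchange expects from showing this advert to this user."""
    return ad.click_probability * ad.bid

def select_ad(candidate_ads: list[Ad]) -> Ad:
    # Pure profit maximisation: no ethical or legal filter is applied here.
    return max(candidate_ads, key=expected_revenue)

# A user whose profile suggests past problem gambling will typically have a higher
# predicted click probability for gambling adverts, so those adverts tend to win.
ads = [
    Ad("gambling_brand", bid=2.50, click_probability=0.08),
    Ad("grocery_brand", bid=1.00, click_probability=0.03),
]
print(select_ad(ads).advertiser)  # -> gambling_brand
```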

In practice, what this means is that if a user’s online profile indicates they have potentially addictive behaviour, are unemployed, have low socio-economic standing, debt issues, or past episodes of problem gambling, they are more likely to be shown gambling ads while visiting non-gambling content online. A 2017 investigation by The Guardian found gambling companies were using third parties to harvest information from people who enter prize draws and similar competitions, in order to target people on low incomes with gambling advertising.

This automation process also makes it likely that social responsibility standards and ethical considerations are being seriously undermined and that advertising is targeting children and the vulnerable.

Advertising algorithms make more money from the vulnerable. Dana.S/Shutterstock

Hidden advertising

In our research, we also found that social media websites provide ample opportunities for peer-to-peer marketing between users, blurring the lines between commercial advertising and user-generated content. So for example, if a social media user brags about a bet they made, it can be unclear whether they have been paid by a gambling operator to do so. This raises the issue of whether advertising is fair to consumers when it cannot be recognised as an advert, but appears more like a recommendation.

Both these problems with the online advertising of gambling have been addressed by the ASA through guidelines on protecting young people and on what constitutes an advert. In the UK, social media users are required to disclose whether they have received a payment, free gift or other perk for a post by using #ad. But this disclosure is often not prominent, it’s not necessarily clear to the user seeing the post what it actually means, and the sanctions for breaching these rules have no real teeth. More fundamental legal changes and stricter enforcement are required, rather than just tinkering with the rules at the edges.

Artificial intelligence used by ad exchanges should comply with a “safety by design” principle: those responsible for designing the big data applications used in the advertising ecosystem should build them to comply with consumer protection and gambling laws. A hard look is needed at how to require ad exchanges to build their algorithms in a way that does not lead to the exploitation of vulnerable users.
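As a rough illustration of what “safety by design” could look like in practice, the sketch below adds an eligibility check that runs before any revenue-maximising selection. The profile fields and exclusion signals are hypothetical assumptions, not a description of any existing platform’s safeguards.

```python
# Hypothetical "safety by design" filter applied before ad selection.
# Profile fields and exclusion signals are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class UserProfile:
    age: int
    self_excluded: bool             # e.g. registered with a self-exclusion scheme
    problem_gambling_signals: bool  # e.g. indicators of past problem gambling

def gambling_ads_allowed(profile: UserProfile) -> bool:
    """Return False if this user should never be served gambling adverts."""
    if profile.age < 18:
        return False
    if profile.self_excluded or profile.problem_gambling_signals:
        return False
    return True

def filter_candidates(profile: UserProfile, ads: list[dict]) -> list[dict]:
    # Remove gambling adverts for excluded users before the auction runs,
    # so the profit-maximising selection step never sees them.
    if gambling_ads_allowed(profile):
        return ads
    return [ad for ad in ads if not ad.get("is_gambling", False)]

# Example: a 16-year-old's profile strips gambling adverts out of the auction.
teen = UserProfile(age=16, self_excluded=False, problem_gambling_signals=False)
candidates = [{"advertiser": "gambling_brand", "is_gambling": True},
              {"advertiser": "grocery_brand", "is_gambling": False}]
print(filter_candidates(teen, candidates))  # only the grocery advert remains
```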

Social media sites should also create strict rules for their users obliging them to prominently identify commercial relationships with gambling advertisers. Instead of turning a blind eye, social media platforms should police their rules on undisclosed advertising and use automated tools to monitor whether users breach these rules. As a last resort, a powerful regulator should step in and enforce fair advertising principles through fines and sanctions.
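One simple form such automated monitoring could take is sketched below: flag posts that mention a watched gambling brand but carry no disclosure tag. The brand watchlist and disclosure tags are hypothetical placeholders; a real system would need far more nuance and human review.

```python
# Hypothetical sketch of automated monitoring for undisclosed gambling promotion.
# Brand names and disclosure tags below are illustrative placeholders, not real data.

GAMBLING_BRANDS = {"examplebet", "luckyspin"}   # assumed watchlist of operators
DISCLOSURE_TAGS = {"#ad", "#sponsored"}         # assumed acceptable disclosure labels

def needs_review(post_text: str) -> bool:
    """Flag a post that mentions a watched gambling brand without a disclosure tag."""
    text = post_text.lower()
    mentions_brand = any(brand in text for brand in GAMBLING_BRANDS)
    has_disclosure = any(tag in text for tag in DISCLOSURE_TAGS)
    return mentions_brand and not has_disclosure

posts = [
    "Just won big on ExampleBet, you should all try it!",  # brand mention, no disclosure
    "Loving my weekend with LuckySpin #ad",                # brand mention, disclosed
]
for post in posts:
    print(needs_review(post), "-", post)
# The first post is flagged for review (True); the second carries a disclosure (False).
```

A crude keyword check like this is only a starting point; meaningful enforcement would still depend on the kind of powerful regulator, backed by fines and sanctions, argued for above.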
