Campaigns like #MeToo and Time’s Up mean that public discussion about sexual harassment has finally bubbled up to the surface. The movements also highlight how such disturbing incidents have routinely gone unreported or been outright ignored.
To find out the extent of the problem, our team at the universities of Sussex and Derby and at UCL recently reviewed the under-reporting of workplace harassment and discrimination, summarising the past 18 years of relevant research in this area.
The findings informed our development of Spot, a chatbot – a computer program that mimics a human interviewer – for recording and reporting workplace harassment anonymously. Spot is powered by natural language processing, a branch of artificial intelligence and computer science concerned with understanding and interpreting human language.
The Spot chatbot – designed to improve the reporting process for those who have experienced or witnessed workplace harassment or discrimination – guides the participant through an evidence-based cognitive interview. The aim is to obtain a high-quality account of what happened while avoiding the leading questions that can produce false memories.
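The core idea of a chatbot-led cognitive interview can be sketched as a scripted sequence of open-ended prompts. The prompts and function names below are illustrative assumptions, not Spot's actual script:

```python
# Minimal sketch of a chatbot-led cognitive interview.
# The prompts and flow are illustrative, not Spot's real implementation.

OPEN_PROMPTS = [
    "In your own words, describe what happened.",
    "What do you remember about where and when it took place?",
    "Is there anything else you recall, however small it seems?",
]

def run_interview(answer_fn):
    """Ask each open-ended prompt in turn and collect free-recall answers.

    Open-ended prompts are used instead of leading questions
    (e.g. "Did he shout at you?"), since leading questions can
    plant false memories and weaken the account as evidence.
    """
    transcript = []
    for prompt in OPEN_PROMPTS:
        answer = answer_fn(prompt)
        transcript.append((prompt, answer))
    return transcript

# Example with scripted answers standing in for a live user:
answers = iter([
    "A colleague made repeated unwanted comments.",
    "Tuesday, in the open-plan office.",
    "A co-worker was present.",
])
transcript = run_interview(lambda prompt: next(answers))
```

The key design choice is that the bot never varies or embellishes its questions, so every participant receives the same neutral, open-ended wording.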
It also removes barriers to harassment reporting: because participants can log incidents without talking to a human, it bypasses concerns about trust, confidentiality and doubts being cast over their allegations.
Spot gives the participant a time-stamped report at the end of the session. Users can keep this as evidence in case it is needed later. It can also send an anonymous version of the report to the user’s employer at the request of the participant. The report is deleted from Spot’s servers 30 days after it is downloaded by the participant.
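The report lifecycle described above – a time-stamped record, an optional anonymised copy for the employer, and deletion 30 days after download – could be modelled like this. The field names and the redaction step are assumptions for illustration, not Spot's real data model:

```python
# Sketch of the report lifecycle: time-stamping, anonymisation,
# and a 30-day retention window that starts at download.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Optional

RETENTION = timedelta(days=30)  # deleted 30 days after download

@dataclass
class Report:
    account: str  # the interview transcript
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    downloaded_at: Optional[datetime] = None

    def download(self) -> str:
        """Return the time-stamped report; the retention countdown starts now."""
        self.downloaded_at = datetime.now(timezone.utc)
        return f"[{self.created_at.isoformat()}]\n{self.account}"

    def anonymised(self) -> str:
        # Placeholder redaction: real anonymisation is far more involved.
        return self.account.replace("Alice", "[REDACTED]")

    def expires_at(self) -> Optional[datetime]:
        if self.downloaded_at is None:
            return None  # never downloaded, so no deletion clock yet
        return self.downloaded_at + RETENTION

r = Report("Alice reported repeated unwanted comments.")
text = r.download()
```

Tying deletion to the download time, rather than the creation time, means a report cannot silently expire before the participant has retrieved their copy.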
The growing awareness of the sexism and power dynamics that underpin toxic workplaces shows why it’s time to start a broader dialogue about harassment and discrimination, and to find better ways of tackling this issue.
With our research colleague, Rashid Minhas, we broadly reviewed preliminary research into all forms of workplace discrimination to try to understand how much harassment is experienced, how much is reported and what the repercussions of reporting were for people speaking out against abuse. Different studies approached the same issue in fundamentally different ways, making them difficult to generalise. Nonetheless, we found some startling figures.
We identified six main problems, evidenced by studies, associated with reporting workplace harassment and discrimination:
Workplace harassment is rarely reported. The most comprehensive estimate, from a sample of 91,503 filed charges in the US, found that about 70% of harassment cases went unreported – and most studies found reporting rates far lower than even the 30% this implies. In some industries, such as the restaurant business, just 3% of harassment and discrimination is logged.
There are many barriers to reporting. When asked why, people said they did not know how to report harassment allegations; they feared retaliation, citing the negative consequences others had suffered for reporting incidents; they were reluctant to disclose personal issues or protected characteristics (such as their sexuality); and they had concerns about trust and confidentiality.
No health-related consequences were found when reports were handled appropriately and management responded promptly. In other words, it was not always the harassment itself that was linked to poor health, but the way in which complaints were handled.
Harassment is linked to negative workplace outcomes, studies show. Harassed people were less satisfied with their jobs, and were more likely to want to leave. Harassment was also found to have led to bad press or litigation.
Chatbots can explain what needs to happen if a person has been harassed or discriminated against. They can tackle otherwise difficult conversations. They don’t judge, and they don’t have the same biases that humans do. They are available online day and night. They can allow for confidentiality and anonymity, reducing potential backlash.
The bots can be designed to ask the best questions every time, although not all do. Microsoft’s chatbot Tay – a machine learning experiment in which it was hoped that the bot would appear as a young woman interacting with others on Twitter – serves as a cautionary tale. It spectacularly backfired when Tay started tweeting Nazi, anti-feminist and racist views, after the project was hijacked by miscreants.
However, it has become clear from our research, and the #MeToo campaign, that many people want to speak up about harassment at work but are still reluctant to report it. Chatbots can offer reliable and scalable services to empower people to come forward by removing some of the barriers to reporting.
More voices must join the conversation to speak out against harassment – only together can we change the institutional structures that have allowed it to flourish. We hope that Spot, and tools like it, will help facilitate improved reporting in the workplace.