When I started in TV journalism three decades ago, pictures were still gathered on film. By the time I left the BBC in 2015, smartphones were being used to beam pictures live to the audience. Following the digital revolution and the rise of online giants such as Facebook and Google, we have witnessed what Joseph Schumpeter described as the “creative destruction” of the old order and its replacement by the innovative practices of new media.
There has been a great deal of furious – and often hyperbolic – discussion in the wake of the US election, blaming the “echo-chamber” of the internet – and Facebook in particular – for distorting political discourse and drowning the online public in “fake news”. Antidotes are now sought to ensure that “truth filters” guard the likes of Facebook – and its users – from abuse at the hands of con artists wielding algorithms.
Facebook and Google are now the big beasts of the internet when it comes to distributing news – and as they have sought to secure advertising revenue, what has slowly but surely emerged is a kind of “click-mania”. This is how it works: the social media platforms and search engines sell advertising around news stories, so the more clicks a story gets, the more eyeballs see their advertising – and the more revenue they collect. In this media environment, more clicks mean more revenue, so the content they prioritise is inevitably skewed towards “clickbait”: stories chosen for their probability of getting lots and lots of readers to click on them. Quality and veracity are low on the list of requirements for these stories.
It is difficult to argue that this has not affected online editorial priorities, with hyperbolic headlines becoming ever more finely tuned to this end. At times, on some platforms, it has resulted in what Nick Davies dubbed “churnalism”, whereby stories are not properly fact-checked or researched.
Erosion of trust
Consumption patterns are inevitably affected by all this creative destruction, and social media sites have quickly replaced “the press” as leading sources of news. Yet there is a danger that the resulting information overload is eroding trust in information providers.
On his recent trip to Germany, the outgoing US president, Barack Obama, captured the dilemma the public faces:
If we are not serious about facts and what’s true and what’s not, if we can’t discriminate between serious arguments and propaganda, then we have problems.
There is a renewed recognition that the traditional “gatekeepers” – journalists working in newsrooms – do provide a useful filter mechanism for the overabundance of information that confronts the consumer. But their once-steady advertising revenues are fast being rerouted to Facebook and Google. As a result, traditional news companies are bleeding to death – and the currently popular strategy of introducing paywalls and subscriptions is not making up the losses. Worse still, many newspapers continue to suffer double-digit falls in circulation, so the gatekeepers are “rationalised” and the public is the poorer for it.
Rise of the algorithm
One of the answers lies in repurposing modern newsrooms, which is what the Washington Post is doing under its new owner, Jeff Bezos. Certainly, journalists have to find ways of encouraging people to rely less on, or become more sceptical of, social media as their primary source of news. Even Facebook has recognised that it needs to do more to avoid fakery being laundered and normalised on its platform.
So how to avoid falling for fakery? One option involves the use of intelligent machines. We live in a media age of algorithms and there is the potential to use artificial intelligence as a fundamental complement to the journalistic process – rather than simply as a tool to better direct advertising or to deliver personalised editorial priorities to readers.
Software engineers already know how to build a digital architecture with natural language processing techniques to recognise basic storylines. What is to stop them sampling a range of versions of a story from various validated sources to create a data set, and then using algorithms to strip out bias and reconstruct the core, corroborated facts of any given event?
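To make that concrete, here is a minimal sketch – my own illustration, not a description of any existing system – of how corroboration across sources might be detected. It assumes the scikit-learn library is available; the outlet texts and the 0.6 similarity threshold are invented purely for the example. Sentences that a second, independent source also reports are kept as the corroborated core; everything else would be set aside for human checking.

```python
# A minimal sketch of cross-source corroboration (illustrative only).
# Assumes scikit-learn is installed; the texts and threshold are invented for this example.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical versions of the same story from three validated outlets.
articles = {
    "outlet_a": "The minister resigned on Tuesday. She cited health reasons. Critics allege a cover-up.",
    "outlet_b": "On Tuesday the minister stepped down, citing her health. Opposition figures claim a cover-up.",
    "outlet_c": "The minister resigned this week for health reasons, officials said.",
}

def sentences(text):
    """Naive sentence splitter; a real system would use a proper NLP library."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

# Flatten every sentence, remembering which outlet it came from.
corpus = [(outlet, s) for outlet, text in articles.items() for s in sentences(text)]
vectors = TfidfVectorizer().fit_transform([s for _, s in corpus])
similarity = cosine_similarity(vectors)

# Keep only sentences that a *different* outlet also reports (similarity above threshold):
# these form the corroborated core; uncorroborated claims are flagged for checking.
THRESHOLD = 0.6
corroborated = []
for i, (outlet_i, sent_i) in enumerate(corpus):
    if any(similarity[i, j] >= THRESHOLD and outlet_j != outlet_i
           for j, (outlet_j, _) in enumerate(corpus) if j != i):
        corroborated.append(sent_i)

print("Corroborated core:", corroborated)
```

A real system would need proper sentence segmentation, entity resolution and stance detection, but the principle is the same: trust what independent sources independently confirm.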
Aggregation and summarisation techniques are beginning to deliver results. I know of at least one British tech start-up that, although still in the research and development phase, has built an engine that uses a natural language processing approach to digest data from multiple sources, identify a storyline and provide a credible, artificially intelligent summary – a question, in effect, of interpretation. It is, if you will, a prototype “bullshit detector”, in which an algorithmic solution mimics the old-fashioned journalistic value of searching for the truth.
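The summarisation step can be sketched in the same spirit. The example below is again my own heuristic – assuming scikit-learn and NumPy, and not the start-up’s undisclosed method – and simply ranks sentences by how close they sit to the TF-IDF centroid of the whole set, returning the closest few as an extractive summary of the storyline.

```python
# A toy extractive summariser: rank sentences by similarity to the corpus centroid.
# Illustrative heuristic only; assumes scikit-learn and NumPy are installed.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarise(sentences, top_n=2):
    """Return the top_n sentences closest to the TF-IDF centroid of the input."""
    vectors = TfidfVectorizer().fit_transform(sentences)
    centroid = np.asarray(vectors.mean(axis=0))            # average TF-IDF profile
    scores = cosine_similarity(vectors, centroid).ravel()  # closeness to the "storyline"
    ranked = np.argsort(scores)[::-1][:top_n]
    return [sentences[i] for i in sorted(ranked)]           # keep original order

print(summarise([
    "The minister resigned on Tuesday.",
    "She cited health reasons.",
    "Opposition figures claim a cover-up.",
    "Officials confirmed the resignation this week.",
]))
```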
If we look at the mess our democracies have fallen into because of the new age of free-for-all information, it is clear that we need to urgently harness artificial intelligence to protect open debate – not stifle it. This is one anchor of our democracies that we cannot afford to abandon.