
Technophrenia

Trust may be important, but is it enough to rescue journalism from the Internet?

Robot Journalism.

Last week, prominent tech site Gigaom ceased operations with the terse note "Gigaom recently became unable to pay its creditors in full at this time". Founded in 2006 by Om Malik, the site had raised about $40 million over that period to build a technology news site, an IT analysis business and a business running IT events. None of them could generate enough revenue to cover the $400,000 a month needed to keep the company going.

For a site that covered the future of journalism and media in detail, Gigaom turned out to have little insight into how to succeed in a landscape that leaves legacy and digital media alike in a continuous struggle to survive.

The shutdown of Gigaom follows AOL's closure of two tech sites earlier this year: TUAW (The Unofficial Apple Weblog) and Joystiq were both shut down as part of AOL's process of "simplifying the portfolio of brands".

The problem that digital media sites face is that, unlike traditional print, whose markets are protected geographically, websites largely compete on a level playing field, albeit one shaped in part by Google's (and others') ranking of sites in search results. Every tech story that is released gets rapidly echoed by hundreds of sites.

Recycled tech news

Take a recent article about Microsoft's decision to release its personal assistant technology, Cortana, across multiple platforms. A search in Google News brings up 290 versions of the same story.

The ultimate irony is that the originating “exclusive” for this story from Reuters actually comes 15th in the list of news sorted by Google in order of “relevance”.

As the majority of these sites make money from advertising, the inclusion of stories on any given site is motivated not by journalism, good or bad, but by the need to fill the site with constantly refreshing content. In fact, the job of journalism becomes solely one of copy editing, adjusting an already published story for style, format and length, for an individual site.

The danger with this for those employed in the sector (or perhaps a blessing that finally convinces them to do something more worthwhile) is that computers are getting much better at generating this type of content. Algorithms will be able to take a press release, newswire story, or any other story circulating on the Internet and generate a new one with the right mix of specific language tied to brands, advertising, and possibly reader interest.
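To make the idea concrete, here is a deliberately toy sketch of the kind of re-templating described above: the same extracted facts are poured into different boilerplate sentences to produce a "new" story. All names, templates and the `rewrite_story` function are hypothetical illustrations, not any real system's method.

```python
# Toy sketch only: re-templating the same facts into a "new" story.
# Real automated-journalism systems are far more sophisticated; this
# just illustrates why such output is cheap to produce at volume.

import random

# Hypothetical boilerplate sentences a generator might cycle through.
TEMPLATES = [
    "{company} has announced that {product} will {action}.",
    "In a move watched closely by the industry, {company} said {product} would {action}.",
    "{product} is set to {action}, {company} confirmed.",
]

def rewrite_story(facts):
    """Produce a superficially different story from the same facts."""
    template = random.choice(TEMPLATES)
    return template.format(**facts)

facts = {
    "company": "Microsoft",
    "product": "Cortana",
    "action": "be released across multiple platforms",
}

print(rewrite_story(facts))
```

Each run yields a differently worded sentence carrying identical information, which is essentially what the 290 human-edited versions of the Cortana story amount to.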

Recycled news

This state of affairs is not limited to technology news. Take any random headline from the NY Times: for example, a story about CIA funds falling into the hands of Al Qaeda in Afghanistan yields 83 articles all repeating the same story as reported by the NY Times. At least in this case, the NY Times appears at the top of the Google News search results.

Richard Gingras, the Senior Director of News and Social Products at Google, and Sally Lehrman, a fellow of the Markkula Center for Applied Ethics, have suggested that the current situation has eroded the public's trust in journalism in general. They quote reports from the Pew Research Center showing that "55 percent of Americans said they simply did not expect a fair, full and accurate account of the days' events and issues. As many as 26 percent said they did not trust the news to get facts right."

The Trust Project

Gingras and Lehrman's solution to this lack of trust in journalism is the aptly titled "Trust Project", a new effort led by the authors and the Markkula Center for Applied Ethics at Santa Clara University. The project proposes that all journalists disclose their expertise and any conflicts of interest, and engage in rigorous citation in their writing. The sites they write for should also publish a code of ethics.

Whilst these aims are highly laudable and should be common practice (The Conversation implements all of these principles), it is not clear that they will solve the ultimate problem: the vast majority of content on media sites on the Internet is derivative. One estimate by journalist Nick Davies claims that only 12% of stories featured in the UK's quality press were original; the rest were what is often called "churnalism". On the Internet, the proportion of derivative content is likely to be even higher, given the sheer volume produced every day.

Journalism academic Jeff Jarvis has suggested an addition to the Trust Project: that Google News rank derivative stories below their original sources in search results. Judging the originality and ultimate quality of articles in this way might prove difficult to do algorithmically.
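To illustrate one part of that problem, a simple sketch of how derivative text might be flagged is word-shingle overlap (Jaccard similarity) between a candidate article and a purported original. This is a generic near-duplicate detection technique, not how Google News actually ranks stories; the shingle size and example texts here are arbitrary choices for the example.

```python
# Sketch of near-duplicate detection via Jaccard similarity over
# word shingles. Rewritten stories share many shingles with the
# original, but paraphrasing can push the score down, which hints
# at why ranking "derivative" below "original" is hard to automate.

def shingles(text, k=3):
    """Set of overlapping k-word sequences from the text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of the two texts' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a), shingles(b)
    if not (sa or sb):
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "Microsoft will release its Cortana assistant across multiple platforms"
rewrite = "Microsoft will release its Cortana assistant on Android and iOS soon"

print(round(jaccard(original, rewrite), 2))
```

A lightly edited rewrite scores well above zero but well below one, and a heavier paraphrase scores lower still, so an overlap threshold alone cannot cleanly separate churnalism from genuinely original reporting on the same event.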

Gigaom’s passing will largely go unnoticed. There is an endless supply of other sites filling the void, although a vanishingly small number of them will be paying journalists or operating as a business making a profit.
