Convergence Review: a bet each way on user-generated content

The Convergence Review came close to understanding the nature of user-generated content, but not quite.

The Australian Federal Government’s Convergence Review, released yesterday, had a mammoth task. It was trying to establish just how to regulate the future standards, conduct, and technical aspects of today’s media, with all its differing platforms.

How do you regulate and protect consumers and producers at a time when we can watch television on our mobiles and user-generated web video on the television?

The government has proposed a single regulator for all media platforms. But the Review’s stance on regulation is somewhat inconsistent, at least when it comes to the thorny issue of user-generated content in social media.

It could also have been bolder on copyright issues, representing a lost opportunity to push for improved positive fair use rights covering copyright material in user-generated content.

Regulation of user-generated content

The Review consistently argues that regulation should have a “light touch” and only be focused on “significant enterprises.”

The report states that this is “a significant change moving away from regulation defined by the platforms on which services are delivered” and that “content from social media including bloggers and user-generated content should be free from new regulation.”

This could be seen as a real step forward in governmental understanding of the nature of social media platforms and an implicit recognition that freedom of expression is a critical aspect of a converged media.

But the Review falls short, not only on regulation but also in articulating the relationship between convergence and freedom of expression.

Industry or government regulation?

The Review argues that industry regulation of user-generated content is likely to be better than governmental regulation, but also notes the problem of limited accountability.

The report outlines the problem that some hosts of user-generated material are “only scrutinised if users complain” meaning they have “limited accountability for their content.”

But the Review’s authors try to have a bet each way here. The report approvingly cites a 2008 investigation by the UK House of Commons Culture, Media and Sport Committee into online content, which stated:

“It is not standard practice for staff employed by social networking sites or video sharing sites to preview content before it can be viewed by consumers. Some firms do not even undertake routine review of material uploaded, claiming that the volumes involved make it impractical. We were not persuaded by this argument, and we recommend that proactive review of content should be standard practice for sites hosting user-generated content.”

This is strongly related to the kind of potentially crippling requirement that prompted the anti-SOPA/PIPA Internet Blackout in January this year, which was a response to proposals for active monitoring of copyright violation.

The Review never directly defines what a “proactive review of content” would encompass, or how much of it would be required, which might open the door to very wide monitoring and other obligations.

The report’s authors seem most concerned about preventing the viewing of inappropriate content, especially by children. The Review proposes that there needs to be a combination of local technologies, such as parental locks, and more infrastructural technologies and regulations.

This is where the Review starts to stray into more-or-less tacit approval of various schemes to restrict access to material deemed inappropriate. It does not, thankfully, recommend the “Rabbit-Proof Fence” approach of ISP filtering that failed in 2010, but nor does it argue strongly against it.

Instead, Appendix F covers links between the Review and the ALRC National Classification Scheme review, but remains vague about its own conclusions on the issue.

Retransmission, copyright, and fair use

In the guise of not treading on the toes of the upcoming ALRC Copyright Review, the Review takes a very timid stance on discussing copyright issues.

The Review discusses only “retransmission” of broadcasts across platforms, not other copyright issues brought up by user-generated content.

I am thinking particularly here of the positive fair use of images, video, or audio from copyright sources, mixed into or mashed up with user-created images, videos, or audio. This is a central issue in the convergence debate, but the Review basically leaves it to a single paragraph at the end of a small section on retransmission:

“Noting the recommendation that there be no licence required to provide any content service (see Chapter 1), the current retransmission rules will need to be reviewed…The Convergence Review proposes that the issue of retransmission be examined as part of this ALRC review.”

It continues: “The Review also proposes that in investigating content-related competition issues, the regulator should have regard to copyright implications and be able to refer any resulting copyright issues to the relevant minister for further consideration by the government”.

The failure even to define positive fair use as a convergence issue, let alone to take a stand on it, is unfortunate. It is a missed opportunity to push for positive user rights to the fair use of copyright content.

While I agree that the ALRC Review is indeed the place for a fully detailed investigation of retransmission and other issues, the Convergence Review does not strongly take up the cause or rights of active users except as multi-platform consumers.

The ALRC Review, then, will not be able to treat the Convergence Review as a source of alignment between notions of convergence and those of personal fair use.
