The debate over the impact of playing violent videogames on aggressive attitudes and behaviour is a long and heated one. Based on my research, I would argue the link is not as straightforward as A + B = C: we should be looking further afield than our gun-toting avatars.
There are researchers who see the link between violent videogames and antisocial behaviour as comparable to that between cigarette smoking and lung cancer, and there are researchers who see no link at all.
Why this massive discrepancy in results?
I recently found myself in the middle of this debate when I (and my coauthor Mark Nielsen) published a paper in the journal PLoS One, presenting three experiments that failed to replicate previously reported effects of violent videogames on social behaviour.
Specifically, our experiments were designed to replicate a previous study, which aimed to demonstrate how playing violent videogames can influence people to be less helpful.
The primary measure was whether participants would help an experimenter when he pretended to accidentally spill some pens on the ground: did the participant help gather them or not?
In our attempts, we weren’t able to recreate the findings: it didn’t matter whether participants played a violent or non-violent videogame; they were equally likely to help the experimenter in this unexpected mishap.
What can we take from these failures to replicate? Well, previous failures to replicate in other literatures, such as behavioural priming, have been blamed on subtle contextual differences between the original and replication studies.
Behavioural priming is the idea that implicitly activating concepts in a person’s mind (such as a concept of “the elderly”) leads to measurable changes in behaviour (such as walking slower).
It’s plausible, therefore, that behavioural primes are so subtle (or from a researcher’s perspective, finicky) that they can easily be washed out by contextual differences between experiments, which operate as other primes (for example, an uncomfortably warm room might prime someone to walk faster, effectively nullifying the elderly prime).
What was perhaps most interesting about our experiments was that, while we (surprisingly) couldn’t show that violent videogames affected our participants, we did show that slight differences in how we ran the experiment had a pretty big effect on helping behaviour, confirming that slight differences in methodology can prime behaviour in unexpected ways.
We found that if we measured helping behaviour during the middle of the experiment, instead of at the end, we could boost the number of participants who helped from 30% to 75% - a huge effect!
We think this might be because a participant leaving the experiment has less motivation to maintain a good relationship with the experimenter than a participant who will interact with the experimenter further.
Interpreting the effects
This leaves us with the question of how to interpret reported effects and subsequent failures to replicate. If an effect is strong, then researchers attempting to replicate studies can afford greater differences between their experiments and those demonstrating the original effect.
It appears, however, that violent videogame effects can be very difficult to replicate, which suggests that any effects they have on behaviour are likely to be very subtle.
What’s perhaps more enlightening from our experiments is that social interaction has a much larger effect on behaviour than playing violent videogames; that is, something relatively benign (inasmuch as it’s a normal daily occurrence) can swamp any videogame effect, making it very difficult to show that videogames influence behaviour at all.
This leads us to question how much attention policy makers should pay to violent videogames when their effects on behaviour are so tiny.
How to treat null results
For a number of years the prevailing attitude in the field has been that violent videogames negatively affect social behaviour. Most of the research from the mid-1990s to the mid-2000s pointed towards videogames being bad for us.
So how can this idea be falsified?
You need to publish data that finds violent videogames do not have a negative effect on behaviour.
This leaves us with something of a mess on our hands. Null results don’t necessarily mean an effect doesn’t exist, only that it is difficult to find (because of, for example, differences between experiments).
This means that null results are often easily dismissed and not published. If this is your policy for accepting or rejecting evidence then you will end up with publication bias - a disproportionate number of studies demonstrating an effect, and comparatively few studies reporting a failure to show the effect (like ours).
We can use a technique called a “meta-analysis” to consolidate a literature of effects by finding the average effect size.
But we are still faced with the problem of unpublished null results: a meta-analysis may falsely conclude that the effect is real because the published literature is skewed towards positive results.
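To see how this skew arises, here is a toy simulation (not from our paper, and with arbitrary numbers chosen purely for illustration): we simulate 200 studies of an effect whose true size is zero, then “publish” only those that happen to show a clearly positive result, and compare the naive averages.

```python
import random
import statistics

random.seed(42)

def simulate_study(true_effect=0.0, n=50):
    """Simulate one two-group study; return the observed mean difference."""
    treatment = [random.gauss(true_effect, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    return statistics.mean(treatment) - statistics.mean(control)

# 200 studies of a genuinely null effect (true_effect = 0)
effects = [simulate_study() for _ in range(200)]

# Publication bias: only results above an arbitrary positive threshold appear in print
published = [e for e in effects if e > 0.2]

# A naive meta-analytic average over each set of studies
print(f"Average over all studies:       {statistics.mean(effects):+.3f}")
print(f"Average over published studies: {statistics.mean(published):+.3f}")
```

The average over all 200 studies hovers near zero, as it should, but the average over the “published” subset is comfortably positive even though no real effect exists. This is why a meta-analysis fed only the published literature can be misled.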
This is the situation the violent videogame literature has been faced with for the past few years.
So where are we now? Of course no single study settles anything. What this work highlights is the need for a change in attitude towards null results in the violent videogame literature, especially in the case of direct replications.
Finally, the violent videogame effect might be meaningful for vast population-level policy making (think about public health initiatives where a 1% decrease in mortality - a small effect - is very meaningful). It’s conceivable, however, that violent videogames are only one of many leisure activities that could be associated with small, negative effects on social behaviour (such as combat sports, contact sports or maybe even something as mild as chess - we don’t know!).
We need to be certain the negative effect of violent videogames is sufficiently greater than that of other leisure pursuits which could also increase aggression.