We take science seriously at The Conversation and we work hard at reporting it accurately. This series of five posts is adapted from an internal presentation on how to understand and edit science by Australian Science & Technology Editor, Tim Dean. We thought you would also find it useful.
You may have heard the advice for pregnant women to avoid eating soft cheeses. This is because soft cheeses can sometimes carry the Listeria monocytogenes bacteria, which can cause a mild infection. In some cases, the infection can be serious, even fatal, for the unborn child.
However, the infection is very rare, affecting only around 65 people out of 23.5 million in Australia in 2014. That’s 0.0003% of the population. Of these, only around 10% are pregnant women. Of these, only 20% of infections prove fatal to the foetus.
We’re getting down to some very small numbers here.
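Just how small becomes obvious if you multiply the figures above together. Here's a back-of-the-envelope sketch (the inputs are the article's rough, rounded figures, so the outputs are rough too):

```python
# Rough figures from the article (Australia, 2014) -- approximations only.
population = 23_500_000
cases = 65                    # listeriosis infections that year

infection_rate = cases / population
print(f"Chance of any infection: {infection_rate:.7%}")  # roughly 0.0003%

pregnant_share = 0.10         # ~10% of cases are pregnant women
fatal_share = 0.20            # ~20% of those prove fatal to the foetus

fatal_cases = cases * pregnant_share * fatal_share       # ~1.3 per year
fatal_rate = fatal_cases / population
print(f"Chance of a fatal outcome: {fatal_rate:.8%}")
```

Multiplied out, the worst-case outcome works out to roughly one chance in 18 million for a person picked at random from the population, which is the scale of number the "13 times more likely" headline sits on top of.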
Even among “high-risk” foods, like soft unpasteurised cheese or preserved meats, Listeria occurs less than 2% of the time. And even then the levels are usually too low to cause an infection.
So why the advice to avoid soft cheeses? Because the worst case scenario of a listeriosis infection is catastrophic. And pregnant women are 13 times more likely to contract listeriosis than an otherwise healthy adult. Thirteen times!
Now, it’s entirely reasonable for a pregnant woman to choose to avoid soft cheeses if she wants to lower her risk as much as possible.
But when it comes to talking about or reporting risk, there’s clearly a vast conceptual – or perceptual – disconnect between the impact of “13 times more likely” and the absolute probability of contracting listeriosis and having complications during a pregnancy.
If we talked about every risk factor in our lives the way health authorities talk about soft cheeses, we’d likely don a helmet and kneepads every morning after we get out of bed. And we’d certainly never drive a car.
The upshot of this example is to emphasise that our intuitions about risk are often out of step with the actualities. So journalists need to take great care when reporting risk so as not to exacerbate our intuitive deficits as a species.
For one, use absolute rather than relative risk wherever possible. If you say eating bacon increases your chances of developing colorectal cancer by 18%, it’s hard to know what that means without knowing the baseline chance of developing colorectal cancer in the first place.
Many readers who skim the headline will believe that eating bacon gives you an 18% chance of developing cancer. That would be alarming.
But if you couch it in absolute terms, things become more clear. For example, once you hear that the probability of you developing colorectal cancer some time before the age of 85 is 8.3%, then it’s far more salient to tell people that eating bacon nudges that lifetime probability up to 9.8%. And then it’s also worth mentioning that the majority of people who develop colorectal cancer live on for five or more years.
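The conversion from relative to absolute risk is a one-line calculation, sketched here with the article's figures:

```python
# Converting a relative risk increase to an absolute one,
# using the figures quoted above.
baseline = 8.3            # lifetime chance (%) of colorectal cancer by age 85
relative_increase = 0.18  # bacon reportedly raises that risk by 18% (relative)

absolute = baseline * (1 + relative_increase)
print(f"{baseline}% baseline -> {absolute:.1f}% for bacon eaters")
```

An 18% relative increase moves the absolute lifetime risk by only about 1.5 percentage points, from 8.3% to roughly 9.8%, which is a very different story from "an 18% chance of cancer".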
Bacon is suddenly a lot less of a death sentence, and is largely returned to its prior status as the king of foods.
Examples and analogies also help, but avoid referencing extreme events, like lightning strikes or shark attacks, as they only muddy (or bloody) the water.
And take great care around the precautionary principle. This principle, which amplifies both our intuitive inability to grok risk and our current hyper risk-averse culture, states that in the absence of evidence that something is safe, we should treat it as dangerous.
It effectively places the onus of proof of safety on the proponents of something new rather than requiring the recipients to prove it’s harmful first.
There are circumstances in which the precautionary principle is warranted: when a possible outcome is so catastrophic that we ought to play it safe, as when assessing a new nuclear reactor design or an untested technology that carries some plausible theoretical risk.
But it’s often misused – at least in a rational sense – to put the kibosh on new technologies that are perceived to be unpalatable by some. Popular targets are genetically modified food, nanotechnology and mobile phone radiation.
Strictly speaking, the precautionary principle requires some plausible suspicion of risk. So, as a rule of thumb, if a technology has been in use for some time, and there is no reliable evidence of harm, then the onus increasingly falls on those who believe it’s unsafe to provide evidence to that effect.
That doesn’t mean these things are guaranteed safe. In fact, nothing can be guaranteed safe. Even the IARC list of carcinogens has only one substance in the lowest risk category of “probably not carcinogenic to humans”. That’s a chemical called caprolactam. And even there we’re out of luck; it’s mildly toxic.
But many of us drink alcohol, eat pickles and walk around in the sun – all of which are known to be carcinogenic (although pickles are only group 2B, “possibly carcinogenic”, thankfully) – and most of us won’t die as a result.
Risk needs to be put in context. And we should never cultivate the expectation in our audience that things can be made absolutely safe.
That said, we should also keep a critical eye on the safety of new technologies and report on them when the evidence suggests we should be concerned. But risk always needs to be reported responsibly in order to not cause undue alarm.
Balance and debate
Balance is oft touted as a guiding principle of journalistic practice. However, it’s often misapplied in the domain of science.
Balance works best when there are issues at stake involving values, interpretation, trade-offs or conflicts of interest. In these cases there is either no fact of the matter that can arbitrate between the various views or there is insufficient information to make a ruling one way or another.
In these cases, a reporter does their job by allowing the various invested parties to voice their views and, given appropriate care and background information, the reader can decide for themselves whether any have merit.
This might be appropriate in a scientific context if there is a debate about the interpretation of evidence, or which hypothesis is the best explanation for it in the absence of conclusive evidence.
But it’s not appropriate when directly addressing some empirical question, such as which is the tallest mountain in the world. In that case, you don’t call for a debate or take a straw poll, you go out and measure it.
It’s also not appropriate when comparing the views of a scientist and a non-scientist on some scientific issue. An immunologist ought not be balanced with a parent when it comes specifically to discussing the safety or efficacy of vaccines.
Many reporters like to paint a vignette using an individual example. This can be a useful tool to imbue the story with emotional salience. But it also risks the introduction of emotive anecdote into a subject that ought to be considered on the weight of the evidence.
However, balance is called for when going beyond the scientific evidence and speaking to its societal or policy implications. Science can (and should) inform policy debates, to the extent they rely on empirical facts, but policy is also informed by values and involves costs and trade-offs.
Scientists can certainly weigh in on such issues – they are citizens too. But one thing to be wary of is scientists who step outside of their areas of expertise to advocate for some personally held belief. They are entitled to do so. But they are no longer an expert in that context.
Words of caution
There are many words that have a technical sense in a scientific context, or which are used by scientists to mean something different from the vernacular sense. Take care when you use these words, and be sure not to conflate the technical and vernacular uses.
We often see headlines saying “science has proven X”, but you actually rarely hear scientists use this term in this context. This is because “proof” has a specific technical meaning, particularly in mathematics.
Proof means an argument that has established the truth of some statement or proposition. Proofs are popular in maths. But in science, certainty is always just out of reach.
Instead of “proof”, use “evidence”, and instead of “proven” use “have demonstrated” or “have shown”.
Like “proof”, “valid” has a technical meaning in logic. It relates to the structure of an argument. In logic, a valid argument is structured such that the premises imply the conclusion.
However, it’s entirely possible to have a valid argument that reaches a false conclusion, particularly if one of the premises is false.
If I say that all dogs have four legs, and Lucky is a dog, therefore Lucky has four legs, that’s a valid argument. But Lucky might have been involved in a tragic incident with a moving car, and only have three legs. Lucky is still a dog, and would resent the implication she’s not, and would like to have words with you about the premise “all dogs have four legs”.
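The distinction can even be checked mechanically. A valid form like the Lucky argument (if P then Q; P; therefore Q) is one where the conclusion holds in every case where the premises hold, regardless of whether the premises are true in the real world. A quick sketch (my own illustration, not from the article):

```python
from itertools import product

# The Lucky argument has the form: (P -> Q), P, therefore Q.
# P = "Lucky is a dog", Q = "Lucky has four legs".
def implies(p, q):
    return (not p) or q

# "Valid" means: in every case where all premises are true,
# the conclusion is also true.
valid = all(
    q                              # conclusion holds...
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and p         # ...whenever both premises hold
)
print(valid)  # True: the argument form is valid
```

Whether the argument is *sound* is a separate question: the premise “all dogs have four legs” is false (poor Lucky), which is how a perfectly valid argument can still deliver a false conclusion.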
Avoid using “valid” to mean “true”. If you do use it, keep it in reference to the structural features of a statement rather than its truth. So a “valid position” is one that is at least rational and/or internally consistent, even if other “valid positions” might hold it to be false.
A cure is pretty final. And hearing about one raises hopes, especially for conditions where no cure yet exists.
Most research we cover won’t actually “cure” anything. Most of the time it will “treat” it instead.
Besides the usual warning against hyperbole, words like “breakthrough” and “revolution” ought to be used very carefully and sparingly.
There are sometimes breakthroughs, particularly if there has been a longstanding problem that has just been solved. The discovery of gravitational waves was a breakthrough, as it provided solid evidence for a theory that had been bumping around for a century.
Revolutions, on the other hand, are a much bigger deal. Generally speaking, a revolution is a discovery that doesn’t just update our theory about how something works, but changes the very way we look at the world.
The favourite example is the shift from classical (Newtonian) physics to (Einsteinian) relativity. Relativity didn’t just change the way we calculate how planets move, it changed the way we think about space and time themselves, altering the very assumptions we hold when we conduct experiments.
Another example might be a large domain of research that has profoundly changed the way a particular discipline works. You could say that evolution has revolutionised biology, or that DNA sequencing has caused a revolution in genetics.
Few discoveries are revolutionary. The less frequently you use the term, the more impact it’ll have when you do.
As mentioned above, debates are for values, not empirical fact. If there’s a debate, be sure to frame it appropriately and not to imply that there can be a genuine debate between fact and opinion.
Illustrating some science stories is easy. It’s a story about a frog; use a picture of a frog. But other stories are harder, and there will be temptation to cut corners and/or rely on well-trodden clichés. Here are some to avoid.
Only use images of Einstein when talking about Albert Einstein. Few scientists these days look like Einstein, and his face should not be the stereotype for genius. Definitely never use clipart illustrations of Einstein-like characters to represent “scientist”.
Some researchers are old white men. But certainly not all. If you’re using a generic scientist, don’t lean on the old white guy in a lab coat; be creative.
Some researchers wear lab coats, and not all of those coats are white. Use photos of people in lab coats only if they’re actually of the researchers in the story. Don’t just go to a stock image library, pick a random model wearing a white lab coat and pat yourself on the back for a job well done.
Avoid stock shots of people pouring colourful liquid into flasks. If it looks like cordial, then it probably is.
And, above all, avoid the “mad scientist”. It’s one of the most corrosive, and pervasive, tropes about science, and we would all benefit from eroding its influence.
Also, be mindful that some images you might consider to be generic actually pack in a lot of technical detail. For example, DNA has a clockwise twist (imagine turning a screwdriver clockwise, the motion your hand makes as it moves forward forms a clockwise helix like DNA). The image above is wrong.
So don’t just chuck any old stock image of DNA into an article. Far too many – even from reputable sources – are wrong. And definitely don’t take DNA and have the designer mirror it because it looks better that way. And above all, don’t then stick that image on the cover of a magazine targeted at geneticists. You’ll get letters. Trust me.