How we can break free from sexism in science


Two women recently had their research paper rejected by a science journal based on an incredibly sexist review of their work – an event that has caused outrage on social media. While the journal, PLOS ONE, has apologised and given the authors a second chance, not everyone is as lucky.

The case provides an opportunity for journals to adopt an open peer-review system – a process in which scientists evaluate the quality of other scientists' work – so that reviewers cannot hide behind anonymity. But it also shows it is time to get tough on the widespread biases in universities.

Peer-reviewed publications are the main currency for academics. It is through such publications that academics tell the world about their latest research findings. Decisions about hiring – and academic career progression – are also made largely on an academic’s publication record. The main purpose of peer review is to act as quality control, making sure the work is technically sound before a paper is made available to the public.

Peer review is clearly something that we need to get right. Ask any researcher though, and you would be hard pressed to find someone who hasn’t had an unhappy review experience.

Unreported numbers?

Occasionally we see dramatic examples of malpractice. In the most recent case, a paper that investigated gender biases in academia based on a survey of PhD students in the life sciences was rejected by PLOS ONE on the basis of a single review. The review was a tirade of undisguised sexism, which suggested that the authors had misinterpreted the results because they are women. It concluded: “It would probably also be beneficial to find one or two male biologists to work with … in order to serve as a possible check against interpretations that may sometimes be drifting too far away from empirical evidence.”

In this case, multiple aspects of the peer-review system failed. The academic editor assigned to the paper was an immunologist, whereas the paper was in the social sciences, bringing into question the editor’s expertise and ability to choose suitable reviewers. Another problem was that only a single review was obtained – usually two or three reviewers are sought to try to obtain balance. It also seems that the editor had not carefully read the review and/or paper, as the review was forwarded without criticism. The editor’s rejection note read:

The qulaity (sic) of the manuscript is por (sic) with issues on methodologies and presentation of resulst (sic). A precise bibliographic search will be useful to improve the manuscript. A clear summary of the issues concerning the quality of this manuscript is given by one reviewer.

Rightly, the journal has issued an apology, the paper is back under review and the original editor and reviewer are no longer on the books.


But while this case was corrected, many are not. A similar level of online rage was directed at the Royal Society, which earlier this year awarded only two of 43 fellowship grants to female applicants. By its own admission, this bias appears to be getting worse each year.

The bigger picture

These recent examples speak of gender biases that are routinely found in academia, whether in grant allocation, hiring, mentoring, reference letters, salaries, invited journal articles or even student feedback.

We also know that gender biases are only the tip of the iceberg – in particular, remarkably little attention is given to racial discrimination. There are substantially fewer studies on racial bias in academia, but there are similar examples of dubious peer review, and there is evidence of racism in article citations and in willingness to mentor students based simply on their names.

We must use these cases to look at how we can improve the situation. Many journals (including PLOS ONE) operate a single-blind review system where the reviewer can see the authors' names, but the authors never see the reviewer’s name. In some disciplines double-blind review is standard, where the authors' names are hidden from the reviewers.

This approach does address some of the problems, but in practice it is often possible to guess who the authors are. Some journals now offer open review, where reviewers sign their comments with their name and/or the review is made publicly accessible. A further step still is to have post-publication review, where all articles are first published and then peer review occurs in public. Indeed PLOS ONE recently announced that they are aiming to move towards open review.

Besides innovations in the peer-review system, we must also all look in the mirror. The system is made up of individuals, and it is those individuals who are biased. Studies show that women are no less likely to discriminate against women – and those of under-represented races are no less likely to have racial biases.

Cognitive biases are so numerous and universal that at the very least we should make ourselves aware of how deeply they can run. Online tests of implicit bias are a great way to start gaining some self-awareness. Institutional training and national programmes to address biases will undoubtedly also help.

In academia, strong hierarchies and nepotism compound the problems associated with biases. For faster change, each and every one of us needs to act as an exemplar – admitting to our own mistakes, calling out those of others and monitoring biases in journals and institutions.

The stakes are higher than most of us realise. Biases in academia distort research outcomes – and can even damage human health. If we continue to ignore our biases then we will continue to stifle the insight we could be gaining from a more diverse set of collaborators. Ultimately we all suffer.