Just as science has changed the world, so has the world changed science.
Once upon a time, scientists could beaver away in relative isolation pursuing their passion for revealing the mysteries of the world we inhabit. It’s hard not to be swept up in the romantic images of Robert Boyle mixing chemicals in his dingy Oxford lab, or Charles Darwin pottering in his gardens at Down House.
But the working life of a scientist has necessarily changed. Research is often an expensive endeavour, and the private wealth that funded science in the past has diminished.
A large portion of health and medical research, in particular, is funded by the public purse, and there is simply not enough money to cater well to the number of people wanting to conduct science. The squeeze of economic rationalism prioritises science that will see 'ground-level' benefits in the short to medium term, despite the considerable weight of evidence that 'blue-sky', long-term research delivers just as much, if not more, economic and social gain.
An upside of the refocusing of the scientific dollar (and thus of scientists) is the strong push towards translating discovered knowledge into real-world applications. This is the carrying of 'petri dish discoveries' through to new treatments for disease, or the application of experimental physics to novel techniques for imaging the body and brain. Provided that this move doesn't obscure the critical importance of the discovery phase of science, only the formidably stubborn scientist would see it as an unwelcome development.
The path from medical discovery to public policy change is often long and tortuous, with many twists and dead-ends along the way. Progress along this path requires the full and concerted effort of any number of champions. Historically, changes in public health policy (childhood immunisation, folate in our bread, infant sleeping position) have required the extraordinary confluence of a body of scientific evidence, policy makers and politicians brave enough to act on it, and a public willing to accept it. But at the earliest stages of this process, the advocacy for public policy change is often driven by the original discoverers of the knowledge: the scientists themselves.
This leads me to the question: Is advocacy a core job requirement for scientists?
It may surprise non-scientists to find that this is very much an open question. On one side are those who argue that scientists are trained to create and test hypotheses, and it is up to others to act on the data generated. This position certainly holds some weight in terms of skill-sets; the intricacies of public policy are rarely taught alongside the basics of the scientific method.
On the other side are those who see advocacy as the logical extension of lab-work. If data are generated that provide a potential path towards improving health and well-being, then it is a scientist's duty to push for that change. Certainly, an ethical argument could be made in favour of this position, particularly if the data were generated using public money. Then there is the other argument for scientists advocating the policy implications of their data: if not scientists, then who?
This is a false dichotomy in the best instances, such as when research is conducted in partnership with industry and/or government. Advocacy for the policy implications of data is then often the purview of other members of the broader team, leaving the scientist to test their next hypothesis.
But there still remains an open question about what we expect from science and our scientists. Job descriptions for scientists rarely, if ever, include ‘advocacy’ as a key requirement or performance indicator, yet history tells us that it is often scientists who have been highly effective champions for applying their own data towards policy change.
Should scientists be hypothesis testers, policy advocates or both?