I’ve heard that we should stop talking about “pure” science and “applied” science; that we should only be talking about “good” science and “bad” science. Last year, CSIRO Chief Executive Megan Clark said as much during question time at her [National Press Club address](http://www.abc.net.au/news/2012-09-05/national-press-club-megan-clark/4244598), and this year I heard it recommended again at the Universities Australia Conference. So let’s talk good and bad.
Defining good and bad
Bad science is easy to spot: poorly controlled experiments, bias or mistakes in interpretation, selective use of data to support a pre-determined viewpoint, and so on. We can look for bad science wherever there is very strong and specific self-interest.
Bad science is a big problem but it is usually exposed – eventually.
Good science is harder to define. Think about mathematical research rather than experimental science. We can agree what bad mathematics is – at least at an elementary level. It is incorrect mathematics.
But what is good mathematics? Or rather, what makes mathematics really good? What is high-quality maths?
Asking the difficult questions
What is quality? What is art? What makes a book, film or song good?
One can at least get rapid feedback about books, films and songs by counting how many people pay for them or watch them. Not perfect measures I know – should Gangnam Style top the list?
But at least these popularity metrics provide a sort of working guide that can be refined over the centuries by public debate and expert criticism. Similarly, one can get an idea about what is good science by looking at the peer and public response, and the contribution of the research to society – but that takes a long time. Too long.
Unfortunately, in the case of scientific research, society has to commit to major public funding up front – in contrast, a lot of art, books and songs get started on lesser public funding; and films sometimes attract private investors.
Strong cases can also be made for the public funding of large-scale engineering infrastructure, but blue-sky science (where practical, real-world applications are not apparent at the outset) presents a challenge.
We need to decide and make choices
In research, questions and curiosity tend to multiply. There are always more good ideas than dollars to fund them. No-one really wants to make the tough choices, but we have to. So obviously we should just support good science, right?
But that’s the problem. It all looks good. Most established scientists, like other professionals, are good. To be a lead investigator you typically do science at school and succeed, then at university and succeed, then complete a doctorate and a period of post-doctoral training. You are surrounded by other experts and you learn from them constantly.
At every step of this long road the competition is intense. The competitors are mostly driven by a “love of science”. So the world’s scientists are both highly motivated and proficient – much like elite athletes, top artists, musicians, writers, film makers or top engineers. The quality may vary a little but no professional scientist I’ve met is not “good”.
What about good public grant applications?
Because of budget limitations, many grant funding bodies support only about 20% of proposals. Those proposals are about exploring the unknown. One is not awarding a contract to build a bridge between two known points. When dealing with the unknown it is hard to know what will be good.
The difficulty of the task means one can’t just look at the title of a research proposal and decide on its relevance and impact. But peer-review committees can use their specific expertise to look deeply and rank proposals within subject areas.
They can identify clues as to which projects will one day make the greatest contributions to the welfare of society. They help funding bodies decide what to bet on and justify their use of public funds.
Here are some typical questions peer-review committees consider:
1) What is the best possible outcome of the project?
2) Could you explain why the research is in the interests of the public?
3) How many other scientists and non-scientists will be interested?
4) Is the feasibility of the project comfortably above zero? (i.e. the idea should be original but not too risky)
5) Are the researchers self-motivated, leading in their field and committed to their question?
6) Is the researcher’s record of productivity relative to opportunity strong and accelerating?
7) Is the field of research competitive and making rapid progress, plateauing or declining?
Obviously, several Nobel prize-winning ideas would have missed out on public funding by those criteria. Every scientist is entitled to a private list of revolutionary blue-sky ideas but they shouldn’t necessarily put them into applications for public funding too early.
Bill Gates has made the interesting point that philanthropists should step in and fund good ideas the public agencies are unlikely to adopt.
Just because an idea doesn’t get funded doesn’t mean it isn’t good.
Ranking track records rather than research proposals
We all know that curiosity-driven blue-sky research can provide the greatest contributions to society. But how can we rank such research when the long-term impact is unknowable?
One approach is to rank researchers. One resorts partly to metrics: papers, citations, grants, research student completions; and partly to other more holistic clues to quality, originality, energy, and so on.
Past performance is a good predictor of future success. I have a very clear list in my head of rising stars, potential Nobel Laureates – even when I don’t have the discipline-specific expertise to really evaluate their project proposals.
Many public funding bodies support researchers rather than projects: the National Health and Medical Research Council (NHMRC) and the Australian Research Council (ARC) have used this strategy for their Fellowship schemes, as have groups such as the Heart Foundation, Cancer Council Australia, and internationally the Howard Hughes Medical Research Institute and the Wellcome Trust.
Of course, this approach works best with established investigators – who have long track records. Thus it is important that alternative mechanisms exist for junior researchers.
The most common mechanism is the provision of training fellowships, possibly linked to mentors who often have the knack of identifying upcoming talent in their disciplines more effectively than broader committees.
So what is good science and what is good art? I don’t think we will always be able to tell up-front.
But when the next grant outcomes are announced I expect that, as usual, many of our best researchers will end up getting funded and their research will ultimately contribute hugely to society.