We take science seriously at The Conversation and we work hard at reporting it accurately. This series of five posts is adapted from an internal presentation on how to understand and edit science by our Australian Science & Technology Editor, Tim Dean. We thought you would also find it useful.
The first four posts in this series covered the scientific method and practical tips on how to report it effectively. This post is more of a reflection on science and its origins. It’s not essential reading, but could be useful for those who want to situate their science reading or writing within a broader historical and conceptual context.
Fair warning: it’s going to get philosophical. That means you might find it frustratingly vague or complicated. If you find yourself getting infuriated at the inability to settle on clear definitions or provide clear answers to important questions, that’s a perfectly natural (and probably quite healthy) response.
These issues have been intensively debated for hundreds, if not thousands, of years, without resolution. We’d likely have given up on them by now, except that these concepts have an unfortunate tendency to influence the way we actually do things, and thus retain some importance.
The foundations of science
Explaining what science is, and entertaining all the debates about how it does or should work, would take up an entire book (such as this one, which I highly recommend). Rather than tackle those issues head-on, this section gives a broad overview.
While it doesn’t get mentioned often outside of scientific circles, the fact is there is no one simple definition of science, and no single definitive method for conducting it.
However, virtually all conceptions of science lean on a couple of underlying philosophical ideas.
The first is a commitment to learning about the world through observation, or empiricism. This is in contrast to alternative approaches to knowledge, such as rationalism (the notion that we can derive knowledge about the world just by thinking about it hard enough) or revelation (the idea that we can learn through intuition, insight, drug-induced hallucinations or religious inspiration).
Another philosophical basis of science is a commitment to methodological naturalism, which is simply the idea that the best way to understand the natural world is to appeal to natural mechanisms, laws, causes or systems, rather than to supernatural forces, spirits, immaterial substances, invisible unicorns or other deities.
This is why scientists reject the claim that ideas like creationism or intelligent design fall within the purview of science. Because these ideas posit or imply supernatural forces, they break with methodological naturalism, and so, no matter how scientific they try to sound, they aren’t science.
(As a side point, science doesn’t assume or imply the stronger claim of philosophical or ontological naturalism. This is the idea that only natural things exist - which usually means things that exist in spacetime - and that there are no supernatural entities at all.
This is a strictly philosophical rather than scientific claim, and one that is generally agreed to be beyond the ken of science to prove one way or the other. So, if cornered, most scientists would agree it’s possible that intangible unicorns exist, but if they don’t exist in spacetime or causally interact with things that do, then they’re irrelevant to the practice of science and can be safely ignored. See Pierre-Simon Laplace’s apocryphal (but no less cheeky) response to Napoleon, who remarked that Laplace had produced a “huge book on the system of the world without once mentioning the author of the universe”, to which Laplace reputedly replied: “Sire, I had no need of that hypothesis.”)
This is where we come to the role of truth in science: there isn’t any. At least, not in the absolute sense.
Instead, science produces facts about the world that are only held to be true with a certainty proportional to the amount of evidence in support of them. And that evidence can never give 100% certainty.
There is a logical reason for this: empiricism is necessarily based on inductive rather than deductive logic.
Another way to put it is that no matter how certain we are of a particular theory, and no matter how much evidence we’ve accrued to support it, we must leave open the possibility that tomorrow we will make an observation that contradicts it. And if the observation proves to be reliable (a high bar, perhaps, but never infinitely high), then it trumps the theory, no matter how dearly it’s held.
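One way to make the “never 100% certain” point concrete is in Bayesian terms. (The article itself doesn’t rely on this framing, so treat what follows as an illustrative sketch rather than part of the argument.) Let H be a hypothesis and E a new observation that supports it. Bayes’ theorem says our updated confidence should be:

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}
                   {P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
```

So long as our prior confidence P(H) is below 1, and the observation has some nonzero chance of occurring even if the hypothesis is false (that is, P(E | ¬H) > 0), the denominator is strictly larger than the numerator, so the updated confidence P(H | E) also stays below 1. Each observation can push our confidence higher, but no finite run of observations pushes it all the way to certainty.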
The Scottish philosopher David Hume couched the sceptical chink in empiricism’s armour of certainty like this: all we know about the world comes from observation, and all observation is of things that have happened in the past. But no observation of things in the past can guarantee that things in the future will operate in the same way.
This is the “problem of induction”, and to this day there is no decisive counter to its scepticism. It doesn’t undermine science entirely, but it does give us reason to stop short of saying we know things about the world with absolute certainty.
Scientific progress
The steady accumulation of evidence is one reason why many people believe that science is constantly and steadily progressing. However, in messy reality, science rarely progresses smoothly or steadily.
Rather, it often moves in fits and starts. Sometimes a new discovery will not only change our best theories, it will change the way we ask questions about the world and formulate hypotheses to answer them.
Sometimes it means we can’t even integrate the old theories into the new ones. That’s what is often called a “paradigm shift” (another term to avoid when reporting science).
For instance, sometimes a new observation will come along that causes us to throw out a lot of what we once thought we knew, as when Friedrich Wöhler’s synthesis of urea in 1828, of all things, forced a rewrite of the contemporary understanding of what it means to be a living thing.
That’s progress of a sort, but it often involves throwing out a lot of old accepted facts, so it can also look regressive. In reality, it’s doing both. That’s just how science works.
Science also has its limits. For one, it can’t say much about inherently unobservable things, like some of the inner workings of our minds or invisible unicorns.
That doesn’t mean it can only talk about things we can directly observe at the macroscopic scale. Science can talk with authority about the microscopic, like the Higgs boson, and the distant, like the collision of two black holes, because it can scaffold those observations on other observations at our scale.
But science also has limits when it comes to discussing other kinds of things for which there is no fact of the matter, such as questions of subjective preference. It’s not a scientific fact that Led Zeppelin is the greatest band ever, although I still think it’s a fact.
There are similar limits when it comes to moral values. Science can describe the world in detail, but it cannot by itself determine what is good or bad (someone please tell Sam Harris; oh, they have). To do that, it needs an injection of values, and those values come from elsewhere. Some say they come from us, or from something we worship (which many people would argue means they still come from us), or from some other mysterious non-natural source. Arguments over which source is the right one are philosophical, not scientific (although they can be informed by science).
Science is also arguably not our only tool for producing knowledge. There are other approaches, as exemplified by the various non-scientific academic disciplines, like history, sociology and economics (the “dismal science”), as well as other domains like art, literature and religion.
That said, to the extent that anyone makes an empirical claim – whether that be about the movement of heavenly bodies, the age of Earth, or how species change over time – science has proven to be our best tool to scrutinise that claim.