Scientists should study pseudoscience – see what the pseudoscientists are up to and perhaps (for a laugh) try a few pseudostudies themselves.
Critically, scientists must learn what really distinguishes science from pseudoscience. We can fall for the comforting myth that pseudoscience is the domain of TV psychics and palm readers claiming to predict earthquakes with the moon. Amusing, sometimes exasperating, but mostly harmless stuff.
But the most dangerous pseudoscience is not produced by amateurish cranks, but by a minority of qualified scientists and doctors. Their pseudoscience is promoted as science by think tanks and sections of the media, with serious consequences.
British doctor Andrew Wakefield’s claims about vaccines and autism continue to impact vaccination rates 16 years on, despite Wakefield being deregistered and his research debunked.
Why do a minority of scientists produce pseudoscience? Clearly some pseudoscience is strongly associated with ideological beliefs, and motivated reasoning can overwhelm data, logic and years of training. Perhaps some scientists get complacent, expecting their hunches to always be correct.
But perhaps there’s another reason that’s closer to home. Is part of the problem how we educate prospective scientists?
Hypothesis
Pseudoscience mimics aspects of science while fundamentally denying the scientific method. A useful definition of the scientific method is:
principles and procedures for the systematic pursuit of knowledge involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses.
A key phrase is “testing of hypotheses”. We test hypotheses because they can be wrong.
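For the uninitiated, a hypothesis test can be remarkably simple. Here is a minimal sketch in Python (using scipy; the fertiliser scenario and the numbers are invented purely for illustration):

```python
# A minimal hypothesis test: do plants given a new fertiliser grow
# taller than untreated controls? (Invented illustrative data.)
from scipy import stats

control = [41.2, 39.8, 44.1, 40.5, 42.7, 38.9]     # heights in cm
fertilised = [44.0, 46.3, 41.9, 47.2, 45.1, 43.8]  # heights in cm

# Null hypothesis: both groups have the same mean height. The t-test
# asks how surprising the observed difference would be if that held.
t_stat, p_value = stats.ttest_ind(fertilised, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A small p-value is evidence against the null hypothesis. Crucially,
# the data are allowed to say "no" - the hypothesis could have failed.
```

The essential feature is that the outcome is not known in advance: the data could have refuted the hypothesis.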
Hypothesis testing is the first victim of pseudoscience. The conclusions are already known, and the data and analyses are (consciously or unconsciously) chosen to reach the desired conclusion.
Unfortunately, high school and undergraduate science students may have limited exposure to hypothesis testing. A student laboratory exercise may repeat an experiment from decades ago, which has been simplified for teaching, and whose conclusions are well known.
Such an exercise teaches technical skills at the expense of hypothesis testing. Should we expect students to “get” hypothesis testing without genuine experience of it? No, and without that experience we may undermine years of education.
Time is of the essence
What is the most time consuming aspect of science? Collecting the data? Producing results?
In a school or university laboratory class, much time is devoted to obtaining the relevant results. However, this doesn’t truly reflect how scientific research is undertaken.
When undertaking scientific research, obtaining a result can be relatively quick. The painful part is cross checking the validity of the result with different experiments and new data, including comparison with already published studies.
Pseudoscience lacks these cross checks. “Discoveries” of alien life appear every year or so in the “Journal of Cosmology”. Inevitably each “discovery” is followed by debunking, showing the “aliens” and “meteorites” have mundane Earthly origins. To a professional scientist, not checking for these obvious and mundane possibilities seems bizarre, but such sloppiness is a hallmark of pseudoscience.
Unfortunately, our teaching laboratory classes don’t always emphasise cross checking. Students often spend most of their time obtaining results, with little time and few marks allocated to validating those results.
Journal articles and media reporting of science also emphasise new results (and understandably so). However, this reporting of science doesn’t reflect how scientists devote their time and effort.
While “the result” is often the prelude to months of painful verification for scientists, are we actually training our students and the public that “the result” is what science is all about?
Nice fit
Fitting mathematical models to data has been fundamental to science since its early history. Johannes Kepler’s mathematical laws of planetary motion, developed in the early 17th century, paved the way for Newton’s theories of motion and gravity.
Students often learn (or assume) that the smaller the difference between the data and a model, the better the model. This is often encouraged by the R² statistic, which is provided by Microsoft Excel spreadsheets. Unfortunately, taken to overly simple extremes, this can lead to problems.
When we look at data, we are often looking at a trend with noise superimposed. For example, maximum temperature gradually increases from winter to summer (trend), but from day-to-day it fluctuates up and down (noise).
We can model the trend with time using a relatively simple function (such as a sine curve), but with more complex functions (like high order polynomials) we can reproduce the fluctuations too. This improvement is largely illusory though, as we are fitting to fluctuations that vary from year to year.
In statistics this sin is known as over-fitting, and its dangers are taught in university courses – but I’ve seen first-hand that students don’t always understand the risks. Perhaps the aesthetic appeal of a model following all data is too great.
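To see the problem concretely, here is a rough sketch in Python (with numpy; the “temperature” data are synthetic and both models are chosen purely for illustration):

```python
# Over-fitting sketch: fit a year of noisy synthetic temperature
# readings, then test both models against a second year of data.
import numpy as np

rng = np.random.default_rng(1)
day = np.arange(0, 365, 7)  # one reading per week

def simulate():
    """Seasonal trend (a sine curve) plus random fluctuations."""
    return 20 + 8 * np.sin(2 * np.pi * day / 365) + rng.normal(0, 3, day.size)

year1, year2 = simulate(), simulate()

# Simple model: seasonal sine and cosine terms (3 free parameters).
X = np.column_stack([np.ones_like(day),
                     np.sin(2 * np.pi * day / 365),
                     np.cos(2 * np.pi * day / 365)])
coef, *_ = np.linalg.lstsq(X, year1, rcond=None)
simple = X @ coef

# Complex model: a 30th-order polynomial that also chases the noise.
complex_fit = np.polynomial.Polynomial.fit(day, year1, deg=30)(day)

def r2(data, model):
    return 1 - np.sum((data - model) ** 2) / np.sum((data - data.mean()) ** 2)

print(f"R^2, year 1 (fitted):   simple {r2(year1, simple):.2f}, "
      f"polynomial {r2(year1, complex_fit):.2f}")
print(f"R^2, year 2 (new data): simple {r2(year2, simple):.2f}, "
      f"polynomial {r2(year2, complex_fit):.2f}")
# The polynomial typically "wins" on the year it was tuned to, but the
# simple seasonal model does better on fresh data - the extra wiggles
# were fitting noise, not signal.
```

On the year it was fitted to, the polynomial’s R² looks better; on a new year of data the advantage vanishes, because the extra parameters were fitting noise.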
Pseudoscience embraces over-fitting in a myriad of ways. Overly complex functions (including artificial neural networks), with no basis in physics, are often fitted to data without caution. Data may be shifted, rejected or filtered without justification.
A common consequence of over-fitting is wild “predictions” based on extrapolating functions into the future. Time and time again, climate change deniers have claimed that long-term warming will soon be replaced by exceptionally rapid cooling. Such claims have not come to pass, and current claims (promoted by Business Advisory Council chairman Maurice Newman, among others) are just as dubious.
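A toy example (again a Python sketch with invented numbers) shows how quickly extrapolation goes wrong:

```python
# Why extrapolating an over-fitted function is dangerous: fit models
# to 30 noisy points on a gentle trend, then step just past the data.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 30)
y = 0.5 * x + rng.normal(0, 0.5, x.size)  # linear trend plus noise

line = np.polynomial.Polynomial.fit(x, y, deg=1)
wiggly = np.polynomial.Polynomial.fit(x, y, deg=15)  # over-fitted

for x_new in (10.5, 11.0, 12.0):
    print(f"x = {x_new}: linear {line(x_new):7.1f}, "
          f"15th-order {wiggly(x_new):10.1f}")
# Inside the data the two fits look much the same; just beyond it,
# the high-order polynomial typically veers off to unphysical values.
```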
Over-fitting isn’t merely an abuse of statistics, but can influence public debate about science. If we don’t teach students about the risks of over-fitting and statistics abuse, public policy may be damaged.
Go team!
Collaboration is a powerful tool for science, enabling scientists to branch into new disciplines, exchange expertise and reduce errors.
Collaboration is also a powerful weapon against pseudoscience. An astronomer knows that Jupiter and Saturn don’t induce meaningful tides on Earth. An oceanographer knows the strengths and weaknesses of tide gauge measurements.
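The astronomer’s knowledge here amounts to a one-line calculation: tidal acceleration scales as mass over distance cubed. A back-of-envelope check in Python, using round textbook values:

```python
# Tidal acceleration scales as M / d**3, so compare Jupiter's tide on
# Earth with the Moon's (round textbook values, Jupiter at opposition).
M_MOON, D_MOON = 7.3e22, 3.8e8          # kg, metres
M_JUPITER, D_JUPITER = 1.9e27, 6.3e11   # kg, metres

ratio = (M_JUPITER / M_MOON) * (D_MOON / D_JUPITER) ** 3
print(f"Jupiter's tide is roughly {ratio:.0e} of the Moon's")  # ~6e-06
```

Jupiter’s tidal influence on Earth is about a millionth of the Moon’s, and Saturn’s is smaller still.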
The flaws of pseudoscience can thrive in the absence of collaboration. The errors in Australian geologist Ian Plimer’s 2009 book Heaven and Earth indicate that Plimer did not collaborate with experts on radiative transfer and astrophysics.
The absence of collaboration by Ian Plimer may be part of a broader pattern. Studies rejecting anthropogenic climate change have an average of 2.0 authors, while studies with no explicitly stated position or studies endorsing anthropogenic climate change have 3.6 and 3.4 authors respectively. Those who reject climate change collaborate less than other scientists, which can increase the likelihood of errors.
Unfortunately students may have limited experience of collaboration. Students sometimes work in groups of two or three, but these groups often don’t reproduce the dynamics of scientific collaborations.
Students don’t always create their own groups, and they often work with students with similar skills. It is rare for students to create new groups with diverse skills from scratch.
Marking schemes that evaluate performance relative to peers may even actively discourage collaboration and the sharing of expertise – the very skills students need to succeed in science.
Can we fix it?
How can we educate scientists while reducing the number of trained pseudoscientists?
We need to make science education more like science itself, and this has been recognised by many science teachers. Students need the time to explore and test multiple plausible hypotheses. We may sacrifice some discipline specific skills along the way, but perhaps this is a price worth paying.
We need to recognise and encourage the cross-disciplinary approach to science. Statistics is sometimes relegated to a few undergraduate subjects, whereas it really has to be learnt (and relearnt) throughout an education and career. Budding scientists also need to learn about decision making, logic and logical fallacies.
We need to find means of making science education reflect the collaborative nature of scientific research. This does happen for many PhD students, but many undergraduate students don’t get the opportunity to embrace and be rewarded for collaboration.
If we cannot effectively educate our students about the true nature of science, a harmful byproduct will be a trickle of trained pseudoscientists who will undermine the effectiveness of science in our society for years to come.
Editor’s note: Michael will be on hand to answer questions between 4 and 5pm AEDT on November 18. Ask your questions about education and pseudoscience in the comments below.