‘The advice is for babies to sleep on their back? In my day it was the exact opposite! Just do what feels right to you.’
‘Last week it was “don’t wear masks”, this week it’s “wear masks” – these scientists just can’t be trusted!’
Sometimes it feels like we are constantly being bombarded with conflicting scientific advice. It’s tempting to respond – as many do – by throwing up our hands and resolving to decide for ourselves what’s best. If scientists keep changing their minds, how can their advice be trusted?

Scepticism and distrust in science have become increasingly common. This was particularly evident during the Covid-19 pandemic, when a sizeable minority of people opted to trust their own judgement, rather than that of scientific authorities, on whether they needed to follow social distancing rules or get vaccinated.
Evidently, attitudes of distrust in science are dangerous for the individuals who hold them, who might (for example) suffer poor health as a result of ignoring dietary advice. But since we are increasingly faced with problems that require collective action, and collective action relies on a broad consensus of belief in the relevant scientific conclusions, these attitudes pose a danger to everyone.
We might initially think that the solution to increasing distrust in science is more science teaching in compulsory education. If, for example, people had a better understanding of how diseases spread and why control measures work, they might be more willing to follow advice on mask-wearing. But it is not clear that imparting additional scientific facts would result in citizens affording scientific conclusions the status that they ought to in their decision-making. In our fast-moving world, we are constantly confronted by new and sometimes unexpected problems. This makes it difficult to predict accurately what factual information needs to be on the curriculum in order for people to be equipped with the right knowledge (although perhaps we can be fairly sure that the number of protons in helium will not be crucial). And even if we were able to predict the future accurately, it is simply not possible to put all of these problems on the curriculum with the level of depth needed for everyone to understand them. This is why we have the division of labour that we do, where scientists become experts in highly specific areas that require a high degree of complex technical knowledge. What is needed instead is for people to understand better the nature of science – including the methods and processes by which scientific progress is made – so that they can trust in the institution(s) of science and follow the recommendations that scientists make, even where they do not understand the reasons behind the recommendations.
In what follows, I first provide some support for the view that there are deficiencies in the public’s understanding of the nature of science, before offering some tentative suggestions for how this situation might be ameliorated.
Diagnosing the Problem
The charge of a widespread misunderstanding of the nature of science is not new. Iris Murdoch complained in 1970 of ‘the domination of inexact ideas of science’ and Mary Midgley frequently attacked what she regarded as the widespread ‘worship of science’ and the ‘grotesquely exaggerated notion of what it is and what it does’. The cause of their worry was not that these attitudes were causing mistrust, but rather over-reach: people thought that science could and would answer every possible question, including questions relating to meaning and ethics. My proposal is that the ‘worship of science’ and science denialism are two sides of the same coin, in that both result from seeing science as all-encompassing, infallible and unchanging. One thing that has changed since the 1970s is increased public access to scientific findings. This availability has shattered the myth of science as delivering a unified, immutable set of truths. The worshipful attitude has been turned on its head, for if ‘what science says’ changes, then science fails to be what it was (mistakenly) claimed to be.
These attitudes may in part be a hangover from the popularity of logical positivism, a movement which regarded statements as meaningful only if they are tautologies or are (in principle) empirically verifiable. The central statement of this view was found in A. J. Ayer’s Language, Truth and Logic (1936), which was one of the most widely read works of philosophy amongst the general public at the time. Ayer links meaningfulness to scientificness: he insists that non-trivial statements are meaningful only if they can be scientifically, or at least observationally, confirmed or disconfirmed. This pushed science to a position of unique authority, since it is only through scientific observation that meaningful statements can be delivered.
The prevalence of mistaken views about the nature of science has also been stoked by the manner in which scientific research is often presented in the media. It is not unusual for the results of a single study to be reported with the title ‘what science says’. If you haven’t noticed this before, try googling ‘here’s what science says about’ and notice how frequently this appears in headlines. Reporting like this is problematic, since it incorrectly gives the impression of a final answer, delivered by a unitary body, ‘science’.
Research by myself and others suggests that school-age children already have these skewed views of the nature of science. In a small-scale focus group study with secondary school students, I found that students strongly associated science with certainty: science gives facts, which in the students’ words are ‘100% true’, ‘proven’ and ‘definite’. The student participants even designated mathematical statements (unprovable by scientific means) as ‘scientific’, because of their certainty. The facts of science were contrasted with ‘opinions’, which students saw as marked out by their lack of consensus. Although the students were sure of science’s uniquely high epistemic status and authority, they struggled to say what gives it this status. In their words, science has its privileged status ‘because it’s science’ and ‘because it’s right’.
This worshipful attitude is dangerous because, as we have seen, it is quickly shattered when ‘what science says’ changes. This suggests that we need a better, more accurate understanding of what science is, one that can withstand scientists sometimes getting things wrong.
The Solution: A Better Understanding of the Nature of Science
One means to improve the public picture of science is through education. The call for more reflection on the nature of science is not new. In England, the National Curriculum for primary schools requires that students ‘develop understanding of the nature, processes and methods of science’. However, this is not taught as a topic in its own right; in fact, the curriculum specifies that ‘working scientifically’ is not to be taught as a separate strand, but rather should be woven into other content. Yet research suggests that teaching on the nature of science is most effective when accomplished via explicit strategies. What should these strategies be aiming for students to understand about the nature of science, and how might these aims be fulfilled?
First, students need to understand the nature of science as delivering probabilistic rather than ‘100% true’ conclusions. Students can be introduced in their science lessons to the distinction between inductive and deductive arguments, learning that the former category (under which scientific arguments fall) delivers only probabilistic conclusions. This distinguishes scientific conclusions from mathematical and logical proofs. Understanding this distinction helps dispel the myth of certainty in science, for even conclusions that are so probable that they are taken to be ‘certain knowledge’ in common parlance are not guaranteed to be true in the way that the conclusion of a sound deductive argument is.
Of course, even though their truth isn’t guaranteed in the way that the truth of mathematical claims is, many scientific claims are certain in the sense of being ‘beyond reasonable doubt’. For example, the claims that ‘the Earth goes round the Sun’ and that ‘germs cause illness’ are established beyond reasonable doubt. In these and many other cases, it would clearly be irrational to suggest that scepticism is the right response.
And even where significant uncertainty remains, it is still inappropriate to throw up our hands and say ‘anything goes’. Understanding that we can draw probabilistic conclusions even in the face of uncertainty helps us see this. For example, there is disagreement about how fast the planet is warming, with different models delivering different results. There is a range of possible scenarios and outcomes, with quantitative probability estimates attached to each. Since all sensible models point towards dangerous warming, the reaction that ‘we just don’t know, so let’s just keep going as we are’ does not make sense. Rather, we can still safely make comparative judgements (e.g. the more swiftly we stop emitting greenhouse gases, the less hostile the environment will be). Examples such as these help develop the understanding that even where ‘science’ has not made an infallible pronouncement on a matter and uncertainty remains, we can still make more or less wise decisions about how to act.
Looking at examples from the history of science can help bring out that scientific conclusions always face the possibility of revision in light of new evidence, or as a result of the re-interpretation of existing evidence, or because a new theory is shown to generate more accurate predictions and have greater explanatory power. For example, students could learn about and reflect on the story of how the Earth-centred picture of the universe was overturned by the Sun-centred model. Similarly, the story of the gradual acceptance of Big Bang theory shows how empirical and theoretical support can mount for a hypothesis, making it more and more probable, until eventually it becomes part of the ‘scientific mainstream’.
It would also be helpful for people to have a better understanding of how probabilities function in the move from research findings to recommendations for practice. Research can indicate that, for example, babies are less likely to die from cot death if sleeping on their back, or that Covid-19 transmission rates are reduced when people are masked and vaccinated, without this meaning that cot death and Covid-19 are eliminated when the advice is followed. Understanding, through the use of relevant examples, how statistics obtained through scientific research inform medical advice is important. For example, students could look at a case study where they follow the entire process, from identifying a problem (e.g. Covid-19 illness), to relevant hypotheses (e.g. transmission is reduced by mask usage), to examples of research studies (including those delivering conflicting conclusions, and a discussion of how these different results might have arisen and how they might be explained), to the advice eventually given (e.g. wear masks) and how it was justified (including, as I will discuss shortly, how non-scientific reasons always feature in decision-making about what public advice to offer or rules to implement). Understanding these processes would help future citizens better understand the nuance that is often there in advice that stems from scientific research. It would also better equip citizens to subject proclamations by Government of ‘following the science’ to critical scrutiny, making the citizenry less vulnerable to manipulation by the state.
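The arithmetic behind this point can be made concrete. The sketch below uses invented, purely illustrative numbers (a hypothetical 2% baseline risk and a hypothetical 60% risk reduction, neither taken from any real study) to show how a protective measure can substantially reduce expected harm without eliminating it:

```python
def expected_cases(population, baseline_risk, relative_risk_reduction):
    """Expected number of cases once a protective measure is applied.

    The measure multiplies the baseline risk by (1 - reduction);
    it lowers the risk but does not bring it to zero.
    """
    return population * baseline_risk * (1 - relative_risk_reduction)

population = 100_000
baseline_risk = 0.02   # invented: 2% risk without the measure
reduction = 0.6        # invented: the measure cuts risk by 60%

without_measure = population * baseline_risk
with_measure = expected_cases(population, baseline_risk, reduction)

print(f"Expected cases without the measure: {without_measure:.0f}")  # 2000
print(f"Expected cases with the measure:    {with_measure:.0f}")     # 800
```

Even with the measure in place, hundreds of expected cases remain; following the advice improves the odds rather than guaranteeing safety, which is exactly the nuance that ‘wear masks’ or ‘back to sleep’ headlines compress away.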
Second, it is important for students to recognize that there is no one, single fixed and immutable authority ‘science’. Rather, there is a changing, historically located, socially situated corpus of scientists. Currently, school curricula focus on imparting facts that are the subject of scientific consensus, when perhaps it might be more helpful to spend time looking at examples where that consensus is lacking. Students might look at recent examples of conflicting conclusions on a topic: debates over how common life is elsewhere in the universe, for example, or what killed the Neanderthals. It might also be useful to look at historical disagreements within the scientific community. The battle between those who viewed the universe as eternal and those who subscribed to some version of the Big Bang theory is a good example. This helps demonstrate that there can be messy disagreements in science, and that scientists can get things wrong. The views of both individual scientists and ‘the institution of science’ can change.
Third, whilst it is important that the expertise of scientists is recognized and that their views should prima facie be regarded as having authority, students must be taught that the proclamations of scientists can, and sometimes should, be subjected to critical scrutiny. Students should be taught about the problematic incentive structures that exist in the publishing of scientific research (e.g. null findings are less likely to be published) and in the funding of scientific research (e.g. the diseases of the rich are more likely to be researched, because they are more likely to make money for Big Pharma). And of course the scientists themselves are only human. They act (at least to some extent) on the basis of self-interest, and are vulnerable to the negative effects of cognitive biases. Students should therefore learn about different forms of bias and how these might infect scientific work. They might learn about cases where scientists have made errors in their reasoning and/or where faulty proclamations have been made in the press. To give just one example, in 2022 The Economist reported a new study with the headline that syringe exchange programmes ‘kill drug users’. But the analysis in the original study ignores ‘survivor bias’ – it only uses data from needle exchange centres that are still around, which are in the areas with most drug use. Since there is an absence of data about needle exchange centres that have closed, ‘attending fully to the data’ cannot help us here. By being exposed to examples such as these, students can become ‘bullshit detectors’, better able to subject the conclusions of scientific research, and especially media reporting of these conclusions, to scrutiny. (And of course, more, but also better, scientific research is another way to expose bad science, demonstrating in what ways biases might have negatively affected earlier research.)
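Survivor bias of this kind can be demonstrated with a toy simulation. The model below is a deliberately simplified invention, not a reconstruction of the actual study: it imagines 10,000 hypothetical centres, assumes that centres in low-use areas are more likely to have closed, and then compares the full population with the ‘surviving’ sample a researcher would actually observe:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Toy model: each needle-exchange centre serves an area with some
# level of drug use (0 to 1). Centres in low-use areas are assumed,
# for illustration, to be more likely to have closed.
centres = []
for _ in range(10_000):
    drug_use = random.uniform(0, 1)
    closed = random.random() < (1 - drug_use) * 0.8
    centres.append((drug_use, closed))

# The true average across ALL areas...
all_mean = sum(use for use, _ in centres) / len(centres)

# ...versus the average in the sample a researcher can observe,
# which contains only the centres still open.
survivors = [use for use, closed in centres if not closed]
survivor_mean = sum(survivors) / len(survivors)

print(f"Mean drug use, all areas:        {all_mean:.2f}")
print(f"Mean drug use, surviving sample: {survivor_mean:.2f}")
```

Because closure is correlated with low drug use, the surviving sample systematically over-represents high-use areas, so any conclusion drawn from it alone overstates drug use around exchange centres. No amount of careful attention to the observed data fixes this; the bias lies in which data exist at all.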
Fourth, students need to be taught in a way that inculcates virtues of epistemic humility, with attitudes of humility directed not only at scientific conclusions themselves (for the reasons discussed above), but also at the scope and possibilities of science. Whilst, as Murdoch said, ‘Science can instruct morality at certain points and can change its direction’, on its own it cannot deliver action-guiding solutions to the problems we face. Its authority does not extend to the moral and political realm, for even the most concrete findings of science cannot tell us what we ought to do. For that, we always have to bring in normative considerations. For example, even if research were unified in showing us that a twenty-week-old foetus experiences pain in the way that a newborn baby does, this would not tell us whether abortion is morally permitted. To be able to draw conclusions about how individuals ought to act and what policies a state ought to adopt, we need to engage in moral reasoning too (often in conjunction with conclusions from the social sciences).
That science alone does not offer political answers is backed up by research suggesting that attitudes to climate change and climate policy are insensitive to levels of scientific training. If just having scientific knowledge were enough, we would all agree on what to do. Students need to practise debating moral and political questions, bringing various considerations to bear and weighing them up. This points towards two implications beyond the science curriculum. First, we need to do more to de-compartmentalize the curriculum in schools, so that there are greater possibilities for interdisciplinary learning and debate. There also needs to be more time in the curriculum devoted to a subject that allows for interdisciplinary debate (‘philosophy’, ‘current affairs’, or whatever we want to call it). Second, an over-emphasis on STEM subjects at the expense of other subjects may be dangerous, for we need to be able to engage in moral analysis to make use of the conclusions that science and technology give us.
Conclusion
During the Covid-19 pandemic, UK politicians spoke of ‘following the science’ and making policy based on ‘the science’. In one sense, this was good for public understanding of science, as it demonstrated that scientific consensus can change. But this was difficult for the public to accept, because they held a mistaken view of science as a unified body of unchanging facts. Additionally, this government strategy incorrectly gave the impression that decisions were being made on the basis of ‘pure science’ alone, masking the fact that policy decisions are of course political decisions. This further fuelled scepticism about the authority of science.
To help provide a corrective, schools need to do more to explicitly address the nature of science in the curriculum. This should be prioritized over knowledge of scientific facts such as the number of protons in helium. Whilst we want people to see science as authoritative and to respect the expertise of scientists, we also want them to understand that there is no unified thing ‘science’, but rather a set of probabilistic conclusions, from which (in conjunction with the conclusions of the social sciences and moral reasoning) we can get the best available guidance for the time. Science is messy, imperfect, and open to revision and update. If people had this understanding of science as making claims that are provisional, their faith in the whole enterprise would be less likely to be undermined when ‘the science’ changes. In spite of the messiness of science, we can still get warranted knowledge-claims, which ought to hold authority in the public mind – for out of this messiness comes informed consensus, and advice worth following.
Acknowledgement
This work was completed whilst funded by a British Academy Postdoctoral Fellowship (Grant Number: PF21\210082). I thank the British Academy for their financial support.