Science Communication and Scientific Judgment

Naomi Oreskes is Professor of the History of Science and Affiliated Professor of Earth and Planetary Sciences at Harvard University. This is an expanded version of a set of essays first published in Scientific American magazine, including “To Understand How Science Denial Works, Look to History,” “Scientists Should Admit They Bring Personal Values to Their Work,” “If You Say ‘Science Is Right,’ You’re Wrong,” “Expert Opinion Can’t Be Trusted if You Consult the Wrong Sort of Expert,” “Making Vaccines Is Straightforward; Getting People to Take Them Isn’t,” and “Don’t Fact-check Scientific Judgment Calls.” You may follow Prof. Oreskes on Twitter @NaomiOreskes.


The year 2020 was truly a historic one—and mostly not in a good way. Among other things, we saw a historic level of disregard for scientific advice with respect to COVID-19, which made the pandemic worse in the United States than in many other countries. But while the events of 2020 may feel unprecedented, the social pattern of rejecting scientific evidence did not suddenly appear in that year of pestilence. There was never any good scientific reason for rejecting the expert advice on COVID-19, just as there has never been any good scientific reason for doubting that humans evolved, that vaccines save lives, and that greenhouse gases are driving disruptive climate change.

Past is Prologue

To understand the social pattern of rejecting scientific findings and expert advice, we need to look beyond science to history, which tells us that many forms of the rejection of expert evidence and the promotion of disinformation have roots in the history of the tobacco industry.

Throughout the first half of the twentieth century, most Americans saw science as something that made their lives better. At the same time, corporate America was developing the playbook for science denial and disinformation. The chief culprit in this darker story was the tobacco industry, whose tactics have been well documented by historians of science, technology, and medicine, as well as by epidemiologists and lawyers. The industry disparaged science by promoting the idea that the evidence linking tobacco use to lung cancer and other diseases was uncertain or incomplete and that attempts to regulate tobacco were a threat to American freedom. It made its products more addictive by increasing their nicotine content while publicly denying that nicotine was addictive. With these methods, the industry was able to delay effective measures to discourage smoking long after the scientific evidence of its harms was clear. In our 2010 book, Merchants of Doubt, Erik M. Conway and I showed how the same arguments were used to delay action on acid rain, the ozone hole, and climate change—and starting in 2020 we saw the spurious “freedom” argument being used to disparage mask wearing.

We also saw the tobacco strategy seeping into social media, which influences public opinion and which many people feel needs to be subject to greater scrutiny and perhaps government regulation. Without a historical perspective, we might interpret this as a novel problem created by a novel technology. But in September 2020, a former Facebook manager testified in the U.S. Congress that the company “took a page from Big Tobacco’s playbook, working to make our offering addictive,” saying that Facebook was determined to make people addicted to its products while publicly describing that goal with the euphemism of increasing “engagement.” Like the tobacco industry, social media companies sold us a toxic product while insisting that they were simply giving consumers what they wanted.

Scientific colleagues often ask me why I traded a career in science for a career in history. History, for some of them, is just “dwelling on the past.” My short answer begins by citing what one of Shakespeare’s characters exclaims in The Tempest: “What’s past is prologue.” If we are to confront disinformation, the rejection of scientific findings, and the negative uses of technology, we have to understand the past that has brought us to this point.

Personal Values vs. Value Neutrality

The notion that science is and should remain value-free has complex historical roots and has been challenged over time. Now, as the U.S. recoils from the divisions of recent years and the scientific community tries to rebuild trust in science, scientists may be tempted to reaffirm their neutrality. If people are to trust us again, as I have frequently heard colleagues argue, we have to be scrupulous about not allowing our values to intrude into our science. This presupposes that value neutrality is necessary for public trust and that it is possible. But available evidence suggests that neither presumption is correct. 

Recent research in communications has shown that people are most likely to accept a message when it is delivered by trusted messengers—teachers, for example, or religious or business leaders, or local doctors and nurses. One strategy to build trust, therefore, is for scientists to build links from their laboratories, institutes, and academic departments into the communities where they live and work. One way to do this—in the United States, at least—is by partnering with organizations such as the National Center for Science Education, which was founded to fight creationism in the classroom but is now working broadly with teachers to increase understanding of the nature of science itself. To do this, scientists do not need to throw off their personal values; they merely need to share with teachers a belief in the value of education. This is important because research suggests that, even if we try, we cannot throw off our values.

It is well known that people are more likely to accept evidence that accords with what they already believe. Psychologists call this “motivated reasoning,” and although the term is relatively recent, the insight is not. Four hundred years ago, Francis Bacon put it this way: “Human understanding is not composed of dry light, but is subject to influence from the will and the emotions [...]. [M]an prefers to believe what he wants to be true.”

Some research suggests that, even with financial incentives, most people are incapable of escaping their biases. Great scientists may think that because they are trained to be objective, they can avoid the pitfalls into which ordinary people fall. But that is not necessarily the case. Does this mean that science cannot be objective? No. What makes it objective is not scientists patrolling their own biases but rather the mechanisms used to ensure that bias is minimized. Peer review is the best known of these, though equally if not more important is diversity. As I contend in the new edition of my book Why Trust Science? (2021), diversity in science is crucial not just to ensure that every person has a chance to develop his or her talent but to ensure that science is as unbiased as possible.

Some will argue that value neutrality is an ideal toward which we should strive, even if we know it cannot be achieved entirely. In the practice of science, this argument may hold. But what is useful in scientific research may be counterproductive in public communication because the idea of a trusted messenger implies shared values. Studies show that U.S. scientists want (among other things) to use their knowledge to improve health, make life easier, strengthen the economy through innovation and discovery, and protect people from losses associated with disruptive climate change.

Opinion polls suggest that most Americans want many of these things, too; according to one recent survey, 73 percent of those polled believe that science has a mostly positive impact on society. If scientists decline to discuss their values for fear that they conflict with the values of their audiences, they may miss the opportunity to discover significant points of overlap and agreement. If, on the other hand, scientists insist on their value neutrality, they will likely come across as inauthentic, if not dishonest. A person who truly had no values—or who refused to allow values to influence their decision-making—would be a sociopath!

Scientific Method and Communication

Value neutrality is a tinfoil shield. Rather than trying to hide behind it, scientists should admit that they have values and be proud that these values motivate research aiming to make the world a better place for all. Francis Bacon, after all, wrote that the goal of science is the “relief of man’s estate.” 

As the COVID-19 crisis has invited onslaughts against their profession, scientists have certainly found inspiration in their values to defend their enterprise. But in their zeal to fight back against vaccine rejection and other forms of science denial, some scientists say things that just are not true—and you cannot build trust if the things you are saying are not trustworthy.

For instance, one popular move made by scientists is to insist that science is right—full stop—and that once we discover the truth about the world, we are done. Anyone who denies such truths (they suggest) is stupid, ignorant, or fatuous. Well, no. Even a modest familiarity with the history of science offers many examples of matters that scientists thought they had resolved, only to discover that they needed to be reconsidered. Some familiar examples are the Earth being the center of the universe, the absolute nature of time and space, the stability of continents, and the cause of infectious diseases. Some conclusions are so well established that we may feel confident we will not be revisiting them. I cannot think of anyone I know who thinks we will be questioning the laws of thermodynamics any time soon. But physicists at the end of the nineteenth century—just before the discovery of relativity and quantum mechanics—did not think they were about to rethink their field’s foundations, either.

Another popular move is to say that scientific findings are true because scientists use “the scientific method.” But we have never actually agreed on what that method is. Some will say it is empiricism: observation and description of the world. Others will say it is the experimental method: the use of experience and experiment to test hypotheses. Recently, a prominent scientist claimed the scientific method was to avoid fooling oneself into thinking something is true that is not, and vice versa.

Each of these views has its merits, but if the claim is that any one of them is the scientific method, then they all fail. Historians and philosophers of science have shown that the idea of a singular scientific method is, well, unscientific. In fact, the methods of science have varied between disciplines and across time. Many scientific practices, particularly statistical tests of significance, were developed with the idea of avoiding wishful thinking and self-deception, but that hardly constitutes “the scientific method.” Scientists have bitterly argued about which methods are the best, and, as we all know, bitter arguments rarely get resolved.

In my view, the biggest mistake scientists make is to claim that this is all somehow simple and therefore to imply that anyone who does not get it is a dunce. Science is not simple, and neither is the natural world; therein lies the challenge of science communication. What we do is both hard and, often, hard to explain. The good news is that when we fall flat, we pick ourselves up, brush ourselves off, and get back to work. Understanding the beautiful, complex world we live in, and using that knowledge to do useful things, is both its own reward and why taxpayers should be happy to fund research.

Scientific theories are not perfect replicas of reality, but we have good reason to believe that they capture significant elements of it. And experience reminds us that when we ignore reality, it sooner or later comes back to bite us.

The Political Variable

While saying “science is always right” may be incorrect, so too is repeating the familiar trope: “Experts are always getting it wrong.” History shows that scientific experts mostly get things right, but examples where they have gone wrong offer the opportunity to better understand the limits of expertise. A case in point is the Global Health Security Index (GHSI), the result of a project led by the Nuclear Threat Initiative and the Johns Hopkins Center for Health Security. It was published in October 2019, just weeks before the novel coronavirus made its appearance.

GHSI researchers evaluated global pandemic preparedness in 195 countries, and the U.S. was judged to be the most prepared country in the world. The UK was rated second overall. New Zealand clocked in at number 35. Vietnam was number 50. As ensuing events showed, the experts certainly got that wrong. Vietnam and New Zealand had among the best initial responses to the COVID-19 pandemic; the UK and the U.S. were among the worst.

So what happened? The GHSI framework was based heavily on “expert elicitation”—the querying of experts to elicit their views. (This method contrasts with consensus reports, which are primarily based on a review of existing, peer-reviewed publications.) Expert elicitation is often used to predict risks or otherwise evaluate things that are hard to measure. Many consider it to be a valid scientific methodology, particularly to establish the range of uncertainty around a complex issue or—where published science is insufficient—to answer a time-sensitive question. But it relies on a key presumption: that we have got the right experts.

The GHSI panel was understandably staffed heavily with directors of national and international health programs, health departments, and health commissions. But the experts included no professional political scientist, psychologist, geographer, or historian; there was little expertise on the political and cultural dimensions of the problem. In hindsight, it is clear that in many countries, political and cultural factors turned out to be determinative.

The United States—a country with some of the most advanced scientific infrastructure in the world and a prodigious manufacturing and telecommunications capacity—failed to mobilize this capacity for reasons that were largely political. Initially, then-President Donald Trump did not take the pandemic seriously enough to organize a forceful federal response, and then, by his own admission, he downplayed it. America’s layered and decentralized system of government led to varied policies, in some cases putting state governments in conflict with their own cities. And many Americans refused to practice social distancing, interpreting it as an infringement on their freedom.

To evaluate American preparedness accurately, the GHSI group needed input from anthropologists, psychologists, and historians who understood American politics and culture. Around the globe, whether countries were able to mount an effective pandemic response depended crucially on governance and the response of their citizens to that governance. The GHSI team got it wrong because the wrong experts were chosen.

The Perplexity of Human Behavior  

Just as the experts on the GHSI team failed to consider the ultimately decisive human element in the COVID-19 battle, so the uptake of vaccines proved to be more complicated than simply making the technology available. Vaccination, and especially the widespread acceptance of vaccines, is a social endeavor that requires consideration of human factors.

However, questions involving human behavior are some of science’s most perplexing. There is a saying in the field of artificial intelligence: “Hard things are easy; easy things are hard.” Activities that most people find very hard, such as playing chess or doing higher mathematics, have yielded fairly readily to computation, yet many tasks that humans find easy or even trivial resist being conquered by machines.

Twenty-five years ago, Garry Kasparov famously became the first world chess champion to lose to a computer. Today, computer programs can beat the world’s best players at poker and Go, write music, and even pass the famous Turing test—fooling people into thinking they are talking to another human. Yet computers still struggle to do things most of us find easy, such as learning to speak our native tongue or predicting from body language whether a pedestrian is about to cross the street—something that human drivers do subconsciously but that can stymie even the most advanced self-driving cars.

AI researchers will tell you that chess turned out to be comparatively easy because it follows a set of rigid rules that create a finite (albeit large) number of possible plays. Predicting the intentions of a pedestrian, however, is a more complex and fluid task that is hard to reduce to rules. No doubt that is true, but I think there is a bigger lesson in the AI experience that applies to more urgent problems. Let’s call it the vaccine-vaccination paradox.

Anyone familiar with biology is hugely impressed by the agile scientific work that in under a year yielded astonishingly effective vaccines to fight COVID-19. Both the Moderna and the Pfizer-BioNTech vaccines use messenger RNA (mRNA) to deliver instructions to cells to generate the spike protein found on the novel coronavirus, which prompts the body to make the antibodies needed to fight an actual infection. It is a brilliant piece of biotechnological work that bodes well for similar uses of mRNA in the future.

Yet even now, more than a year after those vaccines were cleared for use, it is extremely hard to get the American population fully vaccinated, much less boosted. In the United States, the difficulties have included the vexed politics of the past several years, but the logistical challenges turned out to be great as well. Before the vaccines were authorized, some health experts were concerned that there might not be enough vials and syringes or cold storage. Others noted the problem of vaccine hesitancy. And since the vaccines became available, a host of new problems, including such quotidian tasks as scheduling, have plagued the program. The hard task of creating a vaccine proved (relatively) easy; the easy task of vaccination has proved very hard.

In light of the above, maybe it is time to rethink our categories. We view chess as hard because very few people can play it at a high level, and almost no one becomes a grandmaster. In contrast, nearly all of us could probably learn to drive a truck to deliver vaccines. But this perspective confuses difficulty with scarcity. As the AI example shows, many things that all of us can do are in some respects remarkably difficult. Or perhaps we are conflating what is difficult to conceive with what is a challenge to do. Quantum physics is conceptually hard; administering 600 million shots in a large, diverse country with a decentralized health system is a staggering practical challenge.

We call the physical sciences “hard” because they deal with issues that are mostly independent of the vagaries of human nature; they offer laws that (at least in the right circumstances) yield exact answers. But physics and chemistry will never tell us how to design an effective vaccination program or solve the problem of the crossing pedestrian, in part because they do not help us comprehend human behavior. The social sciences rarely yield exact answers. But that does not make them easy.

When it comes to solving real-life problems, it is the supposedly straightforward ones that seem to be tripping us up. The vaccine-vaccination paradox suggests that the truly hard sciences are those that involve human behavior. 

Don’t Fact-check Scientific Judgment

While the salient issue of recent years has been convincing people to accept the facts and get shots into arms, sometimes the struggle is simply deciding what the facts are. In a world that has become relentlessly “truthy,” to borrow Stephen Colbert’s apt neologism, we need journalists, scientists, and other experts to stand up for facts and keep the public debate honest. But this has proved to be a daunting task, especially with regard to issues such as climate change, where there is a tricky gray zone between facts and expert judgments.

One such zone has been on display since the release of a 2018 Intergovernmental Panel on Climate Change (IPCC) special report entitled Global Warming of 1.5 °C, whose authors concluded that we had 12 years left (now 8) to achieve radical reductions in greenhouse gas emissions if warming is to be held to that level. This alert has been widely cited, and politicians who have invoked it have been repeatedly fact-checked. But some of this checking made the dialogue feel more like ice hockey—where the “checking” is intended to disrupt play and establish dominance—than like an effort to help the public understand a complex but crucial issue.

In the second Democratic debate of the 2020 presidential campaign, for example, former U.S. Representative Beto O’Rourke of Texas said, “I listen to scientists on this, and they are very clear. We don’t have more than 10 years to get this right.” And Pete Buttigieg, at the time the mayor of South Bend, Indiana, said, “Science tells us we have 12 years before we reach the horizon of catastrophe when it comes to our climate.” The New York Times declared that both statements were “misleading,” insisting that any claim “that there are 12 or just 10 years until the point of no return goes beyond what the [IPCC] report itself says.” The Washington Post called 12 years “a figure that is frequently cited but often misused,” implying that Buttigieg was among those referencing it in error.

But the IPCC was not stating a fact in the first place. It was presenting a collective expert judgment—in this case, the consensus of 86 authors and review editors from 39 countries. Given that, there will inevitably be a range of legitimate interpretations. With the finding understood in this way, the dynamic of fact-checking is misplaced. It would be as if, after 9/11, the media had fact-checked how politicians characterized the threat to America.

Moreover, consider the headlines that news outlets themselves offered when the report came out. From the New York Times: “Major climate report describes a strong risk of crisis as early as 2040.” The AP: “UN report on global warming carries life-or-death warning.” And just for fun, here is what the New York Post had to say: “Terrifying climate change warning: 12 years until we’re doomed.”

Call me unfussy, but these headlines do not strike me as substantively different from what the politicians said. They use the same language of crisis, of time limits, and of life and death that the fact-checkers rejected. And, contrary to what the fact-checkers asserted, the scientists did, in fact, agree on a time frame.

Politicians do sometimes say things that are egregiously at odds with expert consensus; the overt denial of climate change is the obvious case in point. We should call out conspicuously false claims, such as an assertion that the world will end tomorrow (it might, but not from climate change), but let’s not fact-check things that are not facts. There is a world of interpretation—and therefore a range of justifiable readings—built into any expert judgment. We should discuss that reasonable range and flag claims that are obviously unreasonable. But we should not confuse judgments with facts. Doing so turns what should be a serious discussion into a score-driven hockey brawl.

The same argument, of course, can be made with regard to the vaccine issue and pretty much every other aspect of the fight against COVID-19. And that’s the overall point of this essay. But at the end of the day, discounting, much less disregarding, expert judgment on a pandemic or on any other issue that requires scientific as well as public policy input will do much, much more harm than good. What is my evidence? Well, again, let me quote from The Tempest: “What’s past is prologue.”
