Global Warming Deniers Aren't "Experts" At All: It's Time for a New View of Science

Imagine a gigantic banquet. Hundreds of millions of people come to eat. They eat and drink to their hearts’ content -- eating food that is better and more abundant than at the finest tables in ancient Athens or Rome, or even in the palaces of medieval Europe. Then, one day, a man arrives, wearing a white dinner jacket. He says he is holding the bill. Not surprisingly, the diners are in shock. Some begin to deny that this is their bill. Others deny that there even is a bill. Still others deny that they partook of the meal. One diner suggests that the man is not really a waiter, but is only trying to get attention for himself or to raise money for his own projects. Finally, the group concludes that if they simply ignore the waiter, he will go away.
This is where we stand today on the subject of global warming. For the past 150 years, industrial civilization has been dining on the energy stored in fossil fuels, and the bill has come due. Yet, we have sat around the dinner table denying that it is our bill, and doubting the credibility of the man who delivered it. Economists like Milton Friedman famously insisted that there is no such thing as a free lunch. And they were right. We have experienced prosperity unmatched in human history. We have feasted to our hearts’ content. But the lunch was not free.
It’s not surprising that many of us are in denial. After all, we didn’t know it was a banquet, and we didn’t know that there would be a bill. Now we do know. The bill includes acid rain and the ozone hole and the damage produced by DDT. These are the environmental costs of living the way citizens of the wealthy, developed nations have lived since the Industrial Revolution. Now we either have to pay the price, change the way we do business, or both. No wonder the merchants of doubt have been successful. They’ve permitted us to think that we could ignore the waiter while we haggled about the bill.
The failure of the United States to act on global warming and the long delays between when the science was settled and when we acted on tobacco, acid rain, and the ozone hole are prima facie empirical evidence that doubt-mongering worked. Decision theory explains why. In their textbook, "Understanding Scientific Reasoning," Ronald Giere, John Bickle, and Robert Mauldin show that the outcome of a rational decision-theory analysis is that if your knowledge is uncertain, then your best option is generally to do nothing. Doing something has costs -- financial, temporal, or opportunity costs -- and if you aren’t confident those costs will be repaid in future benefits, you’re best off leaving things alone. Moreover, acting to prevent future harm generally means giving up benefits in the present: certain benefits, to be weighed against uncertain gains. If we didn’t know that smoking was dangerous, but we did know that it gave us pleasure, we would surely decide to smoke, as millions of Americans did before the 1960s.
Uncertainty favors the status quo. As Giere and his colleagues put it, “Is it any wonder that those who benefit the most from continuing to do nothing emphasize the controversy among scientists and the need for continued research?”
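The logic can be made concrete with a toy expected-value comparison. The sketch below is not taken from Giere and his colleagues; it is a minimal illustration with made-up numbers, meant only to show why a certain present-day cost, weighed against an uncertain future benefit, pushes a "rational" calculation toward inaction as confidence in the science drops.

```python
# A toy expected-value comparison (illustrative only; all numbers are
# hypothetical assumptions, not taken from Giere, Bickle, and Mauldin).

def payoffs(p_harm, cost_of_acting, damage_if_harm, prevention_works=1.0):
    """Return (expected payoff of acting now, expected payoff of doing nothing).

    p_harm           -- how certain we are that the future harm is real (0 to 1)
    cost_of_acting   -- the certain, present-day cost of taking action
    damage_if_harm   -- the future damage if the harm is real and nothing was done
    prevention_works -- probability that acting actually averts the damage
    """
    do_nothing = -p_harm * damage_if_harm
    act_now = -cost_of_acting - p_harm * (1.0 - prevention_works) * damage_if_harm
    return act_now, do_nothing

# As certainty about the harm falls, the certain cost of acting stays fixed
# while the expected benefit shrinks, so "do nothing" starts to win.
for p in (0.9, 0.5, 0.1):
    act, nothing = payoffs(p_harm=p, cost_of_acting=30, damage_if_harm=100)
    better = "act now" if act > nothing else "do nothing"
    print(f"certainty={p:.1f}  act now={act:6.1f}  do nothing={nothing:6.1f}  ->  {better}")
```

With these particular (assumed) numbers, acting wins only while the estimated certainty of harm stays above 0.3; push public confidence below that threshold and inaction becomes the "rational" choice, which is precisely the effect that manufactured doubt is designed to produce.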
To change the way the problem of global warming looks, Giere and his colleagues conclude, you’d need “undeniable evidence both that doing nothing will lead to warming and that doing something could prevent it.” But as we have seen, any evidence can be denied by parties sufficiently determined, and you can never prove anything about the future; you just have to wait and see. So the question becomes, Why do we expect “undeniable” evidence in the first place?
The protagonists of our story merchandised doubt because they realized -- with or without the help of academic decision theory -- that doubt works. And it works in part because we have an erroneous view of science. We think that science provides certainty, so if we lack certainty, we think the science must be faulty or incomplete. This view -- that science could
provide certainty -- is an old one, but it was most clearly articulated by the late-nineteenth-century positivists, who held out a dream of “positive” knowledge -- in the familiar sense of absolutely, positively true. But if we have learned anything since then, it is that the positivist dream was exactly that: a dream.
History shows us clearly that science does not provide certainty. It does not provide proof. It only provides the consensus of experts, based on the organized accumulation and scrutiny of evidence. Hearing “both sides” of an issue makes sense when debating politics in a two-party system, but there’s a problem when that framework is applied to science. When a scientific question is unanswered, there may be three, four, or a dozen competing hypotheses, which are then investigated through research. Or there may be just one generally accepted working hypothesis, but with several important variations or differences in emphasis. When geologists were debating continental drift in the 1940s, Harvard professor Marland Billings taught his students no fewer than nineteen different possible explanations for the phenomena that drift theory -- later plate tectonics -- was intended to explain.
Research produces evidence, which in time may settle the question (as it did when continental drift evolved into plate tectonics, which became established geological theory in the early 1970s). After that point, there are no “sides.” There is simply accepted scientific knowledge. There may still be questions that remain unanswered -- to which scientists then turn their attention -- but for the question that has been answered, there is simply the consensus of expert opinion on that particular matter. That is what scientific knowledge is.
Most people don’t understand this. If we read an article in the newspaper presenting two opposing viewpoints, we assume both have validity, and we think it would be wrong to shut one side down. But often one side is represented only by a single “expert” or, as we saw in our story, one or two. When it came to global warming, we saw how the views of Seitz, Singer, Nierenberg, and a handful of others were juxtaposed against the collective wisdom of the entire IPCC, an organization that encompasses the views and work of thousands of climate scientists around the globe -- men and women of diverse nationality, temperament, and political persuasion. This leads to another important point: that modern science is a collective enterprise.
For many of us, the word “science” does not actually conjure visions of science; it conjures visions of scientists. We think of the great men of science -- Galileo, Newton, Darwin, Einstein -- and imagine them as heroic individuals, often misunderstood, who had to fight against conventional wisdom or institutions to gain appreciation for their radical new ideas. To be sure, brilliant individuals are an important part of the history of science; men like Newton and Darwin deserve the place in history that they hold. But if you asked a historian of science when modern science began, she would not cite the birth of Galileo or Copernicus. Most likely, she would discuss the origins of scientific institutions.
From its earliest days, science has been associated with institutions -- the Accademia dei Lincei, founded in 1603; the Royal Society in Britain, founded in 1660; the Académie des Sciences in France, founded in 1666 -- because scholars (savants and natural philosophers, as they were variously called before the nineteenth-century invention of the word “scientist”) understood that to create new knowledge they needed a means to test each other’s claims. Medieval learning had largely focused on the study of ancient texts -- the preservation of ancient wisdom and the appreciation of texts of revelation -- but later scholars began to feel that the world needed something more. One needed to make room for new knowledge. Once one opened the door to the idea of new knowledge, however, there was no limit to the claims that might be put forth, so one needed a mechanism to vet them. These were the origins of the institutional structures that we now take for granted in contemporary science: journals, conferences, and peer review, so that claims could be reported clearly and subjected to rigorous scrutiny.
Science has grown more than exponentially since the 1600s, but the basic idea has remained the same: scientific ideas must be supported by evidence, and subject to acceptance or rejection. The evidence could be experimental or observational; it could be a logical argument or a theoretical proof. But whatever the body of evidence is, both the idea and the evidence used to support it must be judged by a jury of one’s scientific peers. Until a claim passes that judgment -- that peer review -- it is only that, just a claim. What counts as knowledge are the ideas that are accepted by the fellowship of experts (which is why members of these societies are often called “fellows”). Conversely, if the claim is rejected, the honest scientist is expected to accept that judgment, and move on to other things. In science, you don’t get to keep harping on a subject until your opponents just give up in exhaustion.
The he said/she said framework of modern journalism ignores this reality. We think that if someone disagrees, we should give that someone due consideration. We think it’s only fair. What we don’t understand is that in many cases, that person has already received due consideration in the halls of science. When Robert Jastrow and his colleagues first took their claims to the halls of public opinion, rather than to the halls of science, they were stepping outside the institutional protocols that for four hundred years have tested the veracity of scientific claims. Many of the claims of our contrarians had already been vetted in the halls of science and failed to pass the test of peer review. At that point, their claims could not really be considered scientific, and our protagonists should have moved on to other things. In a sense they were poor losers. The umpires had made their call, but our contrarians refused to accept it. Moreover, in many cases these contrarians did not even attempt to have their claims vetted. In fact, many of them had stopped doing scientific research. Our story began in the 1970s, when Fred Seitz was already retired from the Rockefeller University and began defending tobacco, although he was a solid-state physicist, not a biologist, oncologist, or physician. The story continued in the 1980s, when Seitz joined forces with Robert Jastrow and William Nierenberg. How much original research on SDI or acid rain or the ozone hole or secondhand smoke or global warming did any of them do? The answer is nearly none. A search of the Web of Science -- an index of peer-reviewed scientific publications maintained by the Institute for Scientific Information -- shows that Frederick Seitz stopped doing original scientific research around 1970. After that he continued to publish here and there, but mostly book reviews, editorials and letters to editors, and a few works on great men in the history of science. Bill Nierenberg and Robert Jastrow similarly published little in the peer-reviewed journals during this period.
Fred Singer has perhaps the most credible claim to have been a working scientist during the course of our story. In the 1950s and ’60s he published a substantial number of articles on physics and geophysics, many in leading journals such as Nature, Physical Review, and the Journal of Geophysical Research. But around 1970, he too shifted, from then on writing a large number of letters and editorials. Web of Science lists some of these as articles, but it is at least debatable whether most of these constitute original scientific research, such as Singer’s 1992 piece, “Warming Theories Need Warning Labels,” published in the Bulletin of the Atomic Scientists (which, not incidentally, contains an illustration of the domino effect -- shades of his anti-Communism). In the 1980s, Singer did a series of articles
for the Wall Street Journal on oil resources, yet he was not a geologist, a petroleum engineer, or a resource economist, and had done little or no peer-reviewed research on the topic.
The fact is that these men were never really experts on the diverse issues to which they turned their attention in their golden years. They were physicists, not epidemiologists, ecologists, atmospheric chemists, or climate modelers. To have been truly expert on all the different topics on which they commented, they would have to have been all of these things: epidemiologist and ecologist, atmospheric chemist and climate modeler. No one in the modern world is all of those things. Modern science is far too specialized for that. It requires a degree of focus and dedication that makes it a daunting task to be an expert in any area of modern science, much less in several of them at once. If nothing else, this should have clued observers in that these men simply could not have been real experts. An all-purpose expert is an oxymoron.
Journalists were fooled by these men’s stature, and we are all fooled by the assumption that a smart person is smart about everything: physicists have been consulted on everything from bee colony collapse to spelling reform and the prospects for world peace. And, of course, smoking and cancer. But asking a physicist to comment on smoking and cancer is like asking an Air Force captain to comment on the design of a submarine. He might know something about it; then again, he might not. In any case, he’s not an expert.
So what do we do?
We all have to make decisions every day, and we do so in the face of uncertainty. When we buy a car, when we buy a house, when we choose health insurance or save for retirement, we make decisions and we don’t allow uncertainty to paralyze us. But we may rely on people who we think can help us.
Normally, we try to make decisions based on the best information that we can get about the question. Let’s say you need to buy a car. No doubt you will take some test drives, but you’ll also talk to friends, especially those who know something about cars, and maybe read some magazines that evaluate cars, like Consumer Reports or Car and Driver. While you know that magazines make mistakes, and that prices and availability of options can vary, you assume that the information you find is reasonably accurate and realistic. Call it car and driver realism.
The metaphor isn’t quite apt for our discussion though, because in the end buying a car is highly subjective, based to a large extent on questions of taste. I can decide what I think is the right car for me, but there are no experiments I can do or observations I can make that will settle the question for others. There is, in the end, no truth of the matter. So consider a different example.
One of the largest financial decisions most of us make in our lives is the decision to buy a home. When we do, we consider numerous factors: the size and location; access to work, shopping, and recreation; safety and security; the quality of local schools; and of course the price. The process of deciding to make an offer can be wrenching, involving, like the car, a host of subjective factors, but with far more at stake. Once we’ve made the decision to make an offer, however, we need to do one more thing -- something to which most of us give surprisingly little thought, considering how much is at stake.
We do a title search. Or rather, we hire someone to do a title search. We need to know that the title on the property actually belongs to the person who is selling it, and that there are no outstanding claims or liens standing in the way of our ownership. If the person we hire to do the search is incompetent or dishonest, we could end up in a financial disaster. Yet we do trust the title search. Why? The short answer is because we don’t have much choice. Someone has to do the title search, and we do not have the expertise to do it ourselves. We trust someone who is trained, licensed, and experienced to do it for us.
The sociologist Michael Smithson has pointed out that all social relations are trust relations. We trust other people to do things for us that we can’t or don’t want to do ourselves. Even legal contracts involve a degree of trust, because the person involved could always flee to Venezuela. If we don’t trust others or don’t want to relinquish control, we can often do things for ourselves. We can cook our own food, clean our own homes, do our own taxes, wash our own cars, even school our own children. But we cannot do our own science.
So it comes to this: we must trust our scientific experts on matters of science, because there isn’t a workable alternative. And because scientists are not (in most cases) licensed, we need to pay attention to who the experts actually are -- by asking questions about their credentials, their past and current research, the venues in which they are subjecting their claims to scrutiny, and the sources of financial support they are receiving.
If the scientific community has been asked to judge a matter (as the National Academy of Sciences routinely is), or if scientists have self-organized to do so (as in the Ozone Trends Panel or the IPCC), then it makes sense to take the results of their investigations very seriously. These are the title searches of modern science and public policy. It does not make sense to
dismiss them just because some person, somewhere, doesn’t agree. And it especially does not make sense to dismiss the consensus of experts if the dissenter is superannuated, disgruntled, a habitual contrarian, or in the pay of a group with an obvious ideological agenda or vested political or economic interest. Or in some cases, all of the above. Sensible decision making involves acting on the information we have, even while accepting that it may well be imperfect and our decisions may need to be revisited and revised in light of new information. For even if modern science does not give us certainty, it does have a robust track record. We have sent men to the moon, cured diseases, figured out the internal composition of the Earth, invented new materials, and built machines to do much of our work for us -- all on the basis of modern scientific knowledge. While these practical accomplishments do not prove that our scientific knowledge is true, they do suggest that modern science gives us a pretty decent basis for action.
In the early 1960s, one of the world’s leading epidemiologists, initially skeptical of the idea that tobacco was deadly, came around to accepting that the weight of evidence showed that it was. In response to those who still doubted it and insisted that more data were needed, he replied: “All scientific work is incomplete -- whether it be observational or experimental. All scientific work is liable to be upset or modified by advancing knowledge. That does not confer upon us a freedom to ignore the knowledge we already have, to postpone action that it appears to demand at a given time. Who knows, asks Robert Browning, but the world may end tonight? True, but on available evidence most of us make ready to commute on the 8:30 next day.”
Don’t get us wrong. Scientists have no special purchase on moral or ethical decisions; a climate scientist is no more qualified to comment on health care reform than a physicist is to judge the causes of bee colony collapse. The very features that create expertise in a specialized domain lead to ignorance in many others. In some cases lay people -- farmers, fishermen, patients, indigenous peoples -- may have relevant experiences that scientists can learn from. Indeed, in recent years, scientists have begun to recognize this: the Arctic Climate Impact Assessment includes observations gathered from local indigenous groups.
So our trust needs to be circumscribed, and focused. It needs to be very particular. Blind trust will get us into at least as much trouble as no trust at all. But without some degree of trust in our designated experts -- the men and women who have dedicated their lives to sorting out tough questions about the natural world we live in -- we are paralyzed, in effect not knowing whether to make ready for the morning commute or not. We are left, as de Tocqueville recognized two hundred years ago, with nothing but confused clamor. Or as Shakespeare suggested centuries ago, life is reduced to “a tale told by an idiot, full of sound and fury, signifying nothing.” C. P. Snow once argued that foolish faith in authority is the enemy of truth. But so is a foolish cynicism.
In writing this book, we have plowed through hundreds of thousands of pages of documents. As historians during the course of our careers we have plowed through millions more. Often we find that, in the end, it is best to let the witnesses to events speak for themselves. So we close with the comments of S. J. Green, director of research for British American Tobacco, who decided, finally, that what his industry had done was wrong, not just morally, but also intellectually: “A demand for scientific proof is always a formula for inaction and delay, and usually the first reaction of the guilty. The proper basis for such decisions is, of course, quite simply that which is reasonable in the circumstances.” Or as Bill Nierenberg put it in a candid moment, “You just know in your heart that you can’t throw 25 million tons a year of sulfates into the Northeast and not expect some . . . consequences." We agree.
