Tough Calls

How we make decisions in the face of incomplete knowledge and uncertainty

By Baruch Fischhoff

Credit: Wesley Allsbrook

IN BRIEF

When people assess novel risks, they rely on mental models derived from previous experience, which may not be applicable.

Asking people how they form such assessments can reveal misleading preconceptions.

Experts can also test messages about risk to ensure the public understands them clearly.

Psychologists study how humans make decisions by giving people “toy” problems. In one study, for example, my colleagues and I described to subjects a hypothetical disease with two strains. Then we asked, “Which would you rather have? A vaccine that completely protects you against one strain or a vaccine that gives you 50 percent protection against both strains?” Most people chose the first vaccine. We inferred that they were swayed by the phrase about complete protection, even though both shots gave the same overall chance of getting sick.
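
To see why the two shots are equivalent, it helps to write out the arithmetic. The short Python sketch below uses an invented baseline risk and assumes, purely for illustration, that the two strains are equally likely to infect an unvaccinated person and that their risks simply add.

    # Illustrative arithmetic only; the baseline risk figure is invented.
    baseline_risk_per_strain = 0.10  # assumed chance of infection by each strain

    # Vaccine A: complete protection against strain 1, none against strain 2.
    risk_vaccine_a = baseline_risk_per_strain * (1 - 1.0) + baseline_risk_per_strain * (1 - 0.0)

    # Vaccine B: 50 percent protection against both strains.
    risk_vaccine_b = baseline_risk_per_strain * (1 - 0.5) + baseline_risk_per_strain * (1 - 0.5)

    print(risk_vaccine_a, risk_vaccine_b)  # 0.1 0.1 -- the same overall chance of getting sick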

But we live in a world with real problems, not just toy ones—situations that sometimes require people to make life-and-death decisions in the face of incomplete or uncertain knowledge. Years ago, after I had begun to investigate decision-making with my colleagues Paul Slovic and the late Sarah Lichtenstein, both at the firm Decision Research in Eugene, Ore., we started getting calls about non-toy issues—calls from leaders in industries that produced nuclear power or genetically modified organisms (GMOs). The gist was: “We've got a wonderful technology, but people don't like it. Even worse, they don't like us. Some even think that we're evil. You're psychologists. Do something.”

We did, although it probably wasn't what these company officials wanted. Instead of trying to change people's minds, we set about learning how they really thought about these technologies. To that end, we asked them questions designed to reveal how they assessed risks. The answers helped us understand why people form beliefs about divisive issues such as nuclear energy—and today, climate change—when they do not have all the facts.

INTIMATIONS OF MORTALITY

To start off, we wanted to figure out how well the general public understands the risks they face in everyday life. We asked groups of laypeople to estimate the annual death toll from causes such as drowning, emphysema and homicide and then compared their estimates with scientific ones. Based on previous research, we expected that people would make generally accurate predictions but that they would overestimate deaths from causes that get splashy or frequent headlines—murders, tornadoes—and underestimate deaths from “quiet killers,” such as stroke and asthma, that do not make big news as often.
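
A comparison like that can be scored in a few lines. The Python sketch below uses invented death-toll figures (not the study's data) and flags each cause as over- or underestimated by the log ratio of the lay estimate to the statistical one.

    import math

    # Hypothetical lay estimates vs. statistical estimates of annual deaths.
    # These figures are invented for illustration; they are not the study's data.
    estimates = {
        "tornado": (5_000, 90),
        "homicide": (25_000, 19_000),
        "stroke": (20_000, 150_000),
        "asthma": (1_000, 4_000),
    }

    for cause, (lay, actual) in estimates.items():
        log_ratio = math.log10(lay / actual)  # > 0: overestimated, < 0: underestimated
        label = "overestimated" if log_ratio > 0 else "underestimated"
        print(f"{cause:10s} log10(lay/actual) = {log_ratio:+.2f}  {label}")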

Overall, our predictions fared well. People overestimated highly reported causes of death and underestimated ones that received less attention. Images of terror attacks, for example, might explain why people who watch more television news worry more about terrorism than individuals who rarely watch. But one puzzling result emerged when we probed these beliefs. People who were strongly opposed to nuclear power believed that it had a very low annual death toll. Why, then, would they be against it? The apparent paradox made us wonder if by asking them to predict average annual death tolls, we had defined risk too narrowly. So, in a new set of questions we asked what risk really meant to people. When we did, we found that those opposed to nuclear power thought the technology had a greater potential to cause widespread catastrophes. That pattern held true for other technologies as well.

To find out whether knowing more about a technology changed this pattern, we asked technical experts the same questions. The experts generally agreed with laypeople about nuclear power's death toll for a typical year: low. But when they defined risk themselves, on a broader time frame, they saw less potential for problems. The general public, unlike the experts, emphasized what could happen in a very bad year. The public and the experts were talking past each other and focusing on different parts of reality.

UNDERSTANDING RISK

Did experts always have an accurate understanding of the probabilities for disaster? Experts analyze risks by breaking complex problems into more knowable parts. With nuclear power, the parts might include the performance of valves, control panels, evacuation schemes and cybersecurity defenses. With GMO crops, the parts might include effects on human health, soil chemistry and insect species.

The quality and accuracy of a risk analysis depend on the strength of the science used to assess each part. Science is fairly strong for nuclear power and GMOs. For new technologies such as self-driving vehicles, it is a different story. The components of risk could be the probability of the vehicle's laser-light sensors “seeing” a pedestrian, the likelihood of a pedestrian acting predictably, and the chances of a driver taking control at the exact moment when a pedestrian is unseen or unpredictable. The physics of pulsed laser-light sensors is well understood, but how they perform in snow and gloom is not. Research on how pedestrians interact with autonomous vehicles barely exists. And studies of drivers predict that they cannot stay vigilant enough to handle infrequent emergencies.
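
To make the decomposition concrete, here is a minimal fault-tree-style sketch, in Python, of how such component probabilities might be combined into a per-encounter risk figure. Every number is invented, and the components are assumed to be independent; the paragraph's point stands either way, because when the inputs are poorly known, the output inherits that uncertainty.

    # All probabilities below are invented for illustration.
    p_sensor_miss = 0.001              # sensors fail to "see" the pedestrian
    p_pedestrian_unpredictable = 0.01  # pedestrian acts unpredictably
    p_driver_no_takeover = 0.5         # driver fails to take control in time

    # The automation is caught out if the sensors miss the pedestrian OR the
    # pedestrian acts unpredictably; harm also requires the driver to fail.
    p_caught_out = 1 - (1 - p_sensor_miss) * (1 - p_pedestrian_unpredictable)
    p_unmitigated_failure = p_caught_out * p_driver_no_takeover

    print(f"per-encounter risk estimate: {p_unmitigated_failure:.5f}")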

When scientific understanding is incomplete, risk analysis shifts from reliance on established facts to expert judgment. Studies of those judgments find that they are often quite good—but only when experts get good feedback. For example, meteorologists routinely compare their probability-of-precipitation forecasts with the rain gauge at their station. Given that clear, prompt feedback, when forecasters say that there is a 70 percent chance of rain, it rains about 70 percent of the time. With new technologies such as the self-driving car or gene editing, however, feedback will be a long time coming. Until it does, we will be unsure—and the experts themselves will not know—how accurate their risk estimates really are.
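
That kind of calibration check is simple to carry out. The Python sketch below uses invented forecast-outcome pairs: group the outcomes by the stated probability and compare the observed frequency of rain with what the forecasts claimed.

    from collections import defaultdict

    # Invented (stated probability, did it rain?) pairs, for illustration only.
    records = [
        (0.7, True), (0.7, True), (0.7, False), (0.7, True),
        (0.3, False), (0.3, True), (0.3, False), (0.3, False),
    ]

    outcomes = defaultdict(list)
    for stated, rained in records:
        outcomes[stated].append(rained)

    # A well-calibrated forecaster's 70 percent forecasts verify about
    # 70 percent of the time, and likewise for each probability level.
    for stated, obs in sorted(outcomes.items()):
        observed = sum(obs) / len(obs)
        print(f"forecast {stated:.0%}: rained {observed:.0%} of the time (n={len(obs)})")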

THE SCIENCE OF CLIMATE SCIENCE

Expert judgment, which is dependent on good feedback, comes into play when one is predicting the costs and benefits of attempts to slow climate change or to adapt to it. Climate analyses combine the judgments of experts from many research areas, including obvious ones, such as atmospheric chemistry and oceanography, and less obvious ones, such as botany, archaeology and glaciology. In complex climate analyses, these expert judgments reflect deep knowledge refined by evidence-based feedback. But some aspects still remain uncertain.

Credit: Jen Christiansen; Source: “Individuals with Greater Science Literacy and Education Have More Polarized Beliefs on Controversial Science Topics,” by Caitlin Drummond and Baruch Fischhoff, in Proceedings of the National Academy of Sciences USA, Vol. 114, No. 36; September 5, 2017

My first encounter with these analyses was in 1979, as part of a project planning the next 20 years of climate research. Sponsored by the Department of Energy, the project had five working groups. One dealt with the oceans and polar regions, a second with the managed biosphere, a third with the less managed biosphere, and a fourth with economics and geopolitics. The fifth group, which I joined, dealt with social and institutional responses to the threat.

Even then, 40 years ago, the evidence was strong enough to reveal the enormous gamble being taken with our planet. Our overall report, summarizing all five groups, concluded that “the probable outcome is beyond human experience.”

THINKING OF THE UNTHINKABLE

How, then, can researchers in this area fulfill their duty to inform people about accurate ways to think about events and choices that are beyond their experience? Scientists can, in fact, accomplish this if they follow two basic lessons from studies of decision-making.

Credit: Jen Christiansen; Source: Risk: A Very Short Introduction, by Baruch Fischhoff and John Kadvany. Oxford University Press, 2011; Redrafted from “How Safe Is Safe Enough? A Psychometric Study of Attitudes Towards Technological Risks and Benefits,” by Baruch Fischhoff et al., in Policy Sciences, Vol. 9, No. 2; April 1978

LESSON 1: The facts of climate science will not speak for themselves. The science needs to be translated into terms that are relevant to people's decisions about their lives, their communities and their society. While most scientists are experienced communicators in a classroom, out in the world they may not get feedback on how clear or relevant their messages are.

Addressing this feedback problem is straightforward: test messages before sending them. One can learn a lot simply by asking people to read and paraphrase a message. When communication researchers have asked for such rephrasing of weather forecasts, for example, they have found that some people are confused by the statement that there is a “70 percent chance of rain.” The problem is with the words, not the number. Does the forecast mean it will rain 70 percent of the time? Over 70 percent of the area? Or that there is a 70 percent chance of at least 0.01 inch of rain at the weather station? The last interpretation is the correct one.

Many studies have found that numbers, such as 70 percent, generally communicate much better than “verbal quantifiers,” such as “likely,” “some” or “often.” One classic case from the 1950s involves a U.S. National Intelligence Estimate that said that “an attack on Yugoslavia in 1951 should be considered a serious possibility.” When asked what probability they had in mind, the analysts who signed the document gave a wide range of numbers, from 20 to 80 percent. (The Soviets did not invade.)

Sometimes people want to know more than the probability of rain or war when they make decisions. They want to understand the processes that lead to those probabilities: how things work. Studies have found that some critical aspects of climate change research are not intuitive for many people, such as how scientists can bicker yet still agree about the threat of climate change or how carbon dioxide is different from other pollutants. (It stays in the atmosphere longer.) People may reject the research results unless scientists tell them more about how they were derived.

LESSON 2: People who agree on the facts can still disagree on what to do about them. A solution that seems sound to some can seem too costly or unfair to others.

For example, people who like plans for carbon capture and sequestration because the technique keeps carbon dioxide out of the air might still oppose using it on coal-fired power plants. They fear an indirect consequence: cleaner coal may make mountaintop-removal mining more acceptable. Those who know what cap-and-trade schemes are meant to do—create incentives for reducing emissions—might still believe that they will benefit banks more than the environment.

These examples show why two-way communication is so important in these situations. We need to learn what is on others' minds and make them feel like partners in decision-making. Sometimes that communication will reveal misunderstandings that research can reduce. Or it may reveal solutions that make more people happy. One example is British Columbia's revenue-neutral carbon tax, whose revenues are used to lower other taxes; it has also produced broad enough political support to weather several changes of government since 2008. Sometimes, of course, better two-way communication will reveal fundamental disagreements, and in those cases action is a matter for the courts, streets and ballot boxes.

MORE THAN SCIENCE

These lessons about how facts are communicated and interpreted are important because climate-related decisions are not always based on what research says or shows. For some individuals, scientific evidence or economic impacts are less important than what certain decisions reveal about their beliefs. These people ask how their choice will affect the way others think about them, as well as how they think about themselves.

For instance, some people forgo energy conservation measures not because they are against conservation but because they do not want to be perceived as eco-freaks. Others who conserve do it more as a symbolic gesture than out of a belief that it makes a real difference. Using surveys, researchers at Yale Climate Connections have identified what they call Six Americas in terms of attitudes, ranging from alarmed to dismissive. People at those two extremes are the ones who are most likely to adopt measures to conserve energy. The alarmed group's motives are what you might expect. Those in the dismissive group, though, may see no threat from climate change but have noted that they can save money by reducing their energy consumption.

Knowing the science does not necessarily mean agreeing with the science. The Yale study is one of several that found greater polarization among different political groups as people in the groups gained knowledge of some science-related issues. In ongoing research, Caitlin Drummond, currently a postdoctoral fellow at the University of Michigan's Erb Institute, and I have uncovered a few hints that might account for this phenomenon. One possible explanation is that more knowledgeable people are more likely to know the position of their affiliated political group on an issue and align themselves with it. A second possibility is that they feel more confident about arguing the issues. A third, related explanation is that they are more likely to see, and seize, the chance to express themselves than those who do not know as much.

Young activists gathered in New York City in May to demand immediate action on climate change. Credit: Erik McGregor/Getty Images

WHEN DECISIONS MATTER MOST

Although decision science researchers still have much to learn, their overall message about ways to deal with uncertain, high-stakes situations is optimistic. When scientists communicate poorly, it often indicates that they have fallen prey to a natural human tendency to exaggerate how well others understand them. When laypeople make mistakes, it often reflects their reliance on mental models that have served them well in other situations but that are not accurate in current circumstances. When people disagree about what decisions to make, it is often because they have different goals rather than different facts.

In each case, the research points to ways to help people better understand one another and themselves. Communication studies can help scientists create clearer messages. And decision science can help the public to refine their mental models to interpret new phenomena. By reducing miscommunication and focusing on legitimate disagreements, decision researchers can help society have fewer conflicts and make dealing with the ones that remain easier for us all.

MORE TO EXPLORE

Risk: A Very Short Introduction. B. Fischhoff and J. Kadvany. Oxford University Press, 2011.

The Science of Science Communication. Special issue. Proceedings of the National Academy of Sciences USA, Vol. 110, Supplement 3; August 20, 2013. http://www.pnas.org/content/110/Supplement_3

The Science of Science Communication II. Special issue. Proceedings of the National Academy of Sciences USA, Vol. 111, Supplement 4; September 16, 2014. http://www.pnas.org/content/111/Supplement_4

The Science of Science Communication III: Inspiring Novel Collaborations and Building Capacity: Proceedings of a Colloquium. National Academy of Sciences. National Academies Press, 2018.

FROM OUR ARCHIVES

Risk Analysis and Management. M. Granger Morgan; July 1993.

ABOUT THE AUTHOR(S)

Baruch Fischhoff

Psychologist Baruch Fischhoff is Howard Heinz University Professor in the department of engineering and public policy and the Institute for Politics and Strategy at Carnegie Mellon University. He is a member of the National Academy of Sciences and the National Academy of Medicine and a past president of the Society for Risk Analysis.

Credit: Nick Higgins
  122. TOP
Advertisement
Add Comment
Please, Sign In to add comment
Advertisement