June 13, 2011

Science policy communication failure costs lives



A recent issue of Science magazine features a news article about seven scientists in Italy who are facing manslaughter charges for not predicting the danger of an earthquake that killed 308 people. The scientists were part of a risk committee of earth scientists who testified that the tremors preceding the 2009 earthquake were not evidence of an oncoming quake. According to Science, “They agreed that no one can currently predict precisely when, where, and with what strength an earthquake will strike” (3 June 2011, p. 1135). These are all accurate statements, from a scientific point of view. But the problem lies in translating these statements for decision-makers and stakeholders, who include the people of the town of L’Aquila, Italy.

The lead scientist “maintained that he and his scientific colleagues had a responsibility to provide the ‘best scientific findings’ and that it is ‘up to politicians’ to translate the scientific findings into decisions” (Science, 3 June 2011, p. 1136). This is the linear model of science policy at its worst, literally costing lives through the mismatch between scientific and policy risk-management paradigms, or what Cash et al. (2006) call the “loading dock” model: simply delivering scientific results and hoping that the public sphere will pick them up and use them. To the scientists, risk and uncertainty are quantifiable metrics that are difficult to translate into social action. To decision-makers and the public, risk is a socially mediated, multidimensional value that depends on more than just probabilities. Uncertainty has long been a sticking point in earth science and policy topics such as climate change. However, Cash et al. (2006) show how bringing scientists and decision-makers together from the beginning improved the utility of climate models for end-users. They write, “Scientists began to understand that managers were comfortable making decisions under uncertainty, and managers began to understand the concerns scientists had about making scientific claims in the face of uncertainty” (Cash et al., 2006, p. 482). This was clearly not the case with the Italian scientists and decision-makers.

At first glance, this case provokes outcry from scientists afraid of losing the public’s trust and being put on trial, literally. While it may be presumptuous to actually put scientists on trial for a failure to engage in dialogue with decision-makers, the case calls into question the implicit “social contract of science” that has justified basic scientific research since the end of WWII. Sheila Jasanoff told a group of ASU graduate students last spring that, “Scientists have become arrogant, and have not explained to the people why they deserve support... The Enlightenment was not a historical event. It is a process, a mission, a continuous duty to explain yourself” (personal communication, 11 February 2011; not an exact quote, but very close). Jasanoff lays out an alternative to the linear model of science policy that she calls “technologies of humility” (2003). In contrast to calls for “more science” to reduce uncertainty, Jasanoff writes that “what is lacking is not just knowledge to fill the gaps, but also processes and methods to elicit what the public wants, and to use what is already known” (2003, p. 240). The abstract of her paper states, “governments should reconsider existing relations among decision-makers, experts, and citizens in the management of technology. Policy-makers need a set of ‘technologies of humility’ for systematically assessing the unknown and the uncertain” (Jasanoff, 2003, p. 223). Jasanoff and other Science and Society scholars have been writing about the failures of the linear science policy model in predicting risk since the 1980s, when the risk-management paradigm began to crumble in the wake of seemingly “unpredictable” human-technology disasters like Chernobyl. Today we face critical policy issues, from climate change to toxic chemicals, that fundamentally depend upon an understanding of environmental science, but understanding the science alone is not enough. We need a new model of science policy that incorporates the needs of decision-makers and stakeholders from the start, not after it’s too late.
Sources:
Cartlidge, E. (3 June 2011). “Quake Experts to Be Tried For Manslaughter.” Science, 332, pp. 1135-1136.
Cash, D.W., Borck, J.C., & Patt, A.G. (2006). “Countering the Loading-Dock Approach to Linking Science and Decision Making.” Science, Technology, & Human Values, 31, pp. 465-494. http://sciencepolicy.colorado.edu/students/envs_5100/Cashetal2006.pdf
Jasanoff, S. (2003). “Technologies of Humility: Citizen Participation in Governing Science.” Minerva, 41, pp. 223-244. http://sciencepolicy.colorado.edu/students/envs_5100/jasanoff2003.pdf
Further reading:
Sarewitz, D., Pielke, Jr., R.A., & Byerly, R. (editors) (2000). Prediction: Science, Decision Making and the Future of Nature. Washington, DC: Island Press.

4 comments:

  1. Truly terrifying, nearly medieval in its view of science as a force that can turn away divine providence. Even if scientists had accurately predicted the earthquake, what could the city have done to prepare?

    There should be ethical standards for "willfully bad advice," but the Italian courts are not going to find them this way.

  2. Thanks for the comment, Michael. The lead scientist actually blamed poor building construction for the number of deaths and the extent of the damage. This seems to mesh with Pielke Jr. & Sarewitz's argument that climate vulnerability is related to socioeconomic and infrastructural factors rather than an increase in natural disaster intensity.

  3. It's hard not to sympathize with the scientists. After all, they certainly didn't cause the earthquake. On the other hand, the court is clearly trying to grapple with the reality that knowledge is not naively separate from politics, and that people who claim knowledge and expertise in policy contexts should have some form of accountability attached to them. Any politician who interpreted the evidence that badly for her constituents would likely be voted out at the next election. A member of the cabinet would likely just be fired. A company that misled the public that badly about the toxicity of a product it was selling would certainly be subject to claims of negligence or fraud and associated fines. If it lied deliberately, it would probably face criminal liability as well, and perhaps manslaughter charges.

    A sick man driving a car reached into the glove compartment for a kleenex and lost control of the vehicle, which went up on the sidewalk and hit a woman, killing her. We can feel sympathy for him, and certainly he felt tremendous remorse. The court convicted him of manslaughter, and he spent a year in prison.

    In all of these non-science cases, the issue is that people wield power and other people rely on them to wield that power responsibly. If they do not, and the result hurts people, they are held accountable.

    By that logic, if we assume that scientists wield power and that other people rely on them to wield it responsibly, then when they do not, and the results hurt people, should they not also be held accountable in some fashion? And, if the answer is yes, then what standard do we use for doing so and for determining punishment?

  4. Clark, thank you for the thoughtful comments! I have still been grappling with this, and I think this case really highlights that science is a human endeavor.

