Risk Thermostats
I've seen a number of references recently (can't remember exactly where) to John Adams' "risk thermostat" theory. In essence, the risk thermostat theory states that for any activity, a person has a certain level of acceptable risk. If something is done to make the activity safer, the person will adjust their behavior to take more risks, bringing the overall risk level back up to the maximum acceptable level. He famously demonstrated this in a study of seatbelts, arguing that they failed to save lives because belted drivers simply drove faster and more recklessly since they knew the seatbelts would help keep them safe. The risk thermostat should be distinguished from the phenomenon of risk tradeoffs, in which addressing one risk produces another (e.g. chlorinating water to kill bacteria creates a risk of chloroform poisoning).
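To make the mechanism concrete, here's a toy sketch (my own construction, not Adams' formalism, with made-up numbers): a driver always picks the fastest speed that keeps perceived risk at a fixed target. Cutting the risk per unit of speed -- say, by buckling up -- moves the equilibrium speed, not the equilibrium risk:

```python
# Toy model of the risk thermostat (illustrative only; all numbers made up).
# Perceived risk rises with speed and is scaled down by any safety measure.
# The driver chooses the fastest speed that keeps perceived risk at a
# fixed target -- the "thermostat setting".

def equilibrium_speed(target_risk, risk_per_mph, protection=1.0):
    """Fastest speed with risk_per_mph * protection * speed <= target_risk."""
    return target_risk / (risk_per_mph * protection)

TARGET = 1.0         # acceptable risk, in arbitrary units
RISK_PER_MPH = 0.02  # made-up hazard per unit of speed

unbelted = equilibrium_speed(TARGET, RISK_PER_MPH, protection=1.0)
belted = equilibrium_speed(TARGET, RISK_PER_MPH, protection=0.8)  # belt cuts risk 20%

print(f"unbelted: {unbelted:.1f} mph")  # 50.0 mph
print(f"belted:   {belted:.1f} mph")    # 62.5 mph -- same risk, more speed

# The 20% safety gain is consumed entirely as a 25% speed gain: risk stays
# pinned at TARGET. That extra speed is the "performance benefit" of point 1 below.
```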
The risk thermostat idea is frequently cited to show the futility of safety policies. However, there are several qualifications that must be borne in mind (most of them touched on by Adams himself):
1. Adams summarizes the theory as "the potential safety benefit gets consumed as a performance benefit" (emphasis added). In other words, safety measures don't do nothing. They allow us to reap the benefits of taking more risks. So seatbelts, for example, may not save lives, but they let us get where we're going faster and with less anxiety. Of course, it's less politically effective to advocate a policy on the grounds that it increases benefits than on the grounds that it saves lives, except in cases where the benefit is allowing people to do things we think of as normal -- e.g. fixing the ozone hole so that Australians can sunbathe without fear.
2. The risk thermostat's operation depends on the risk victim being able to recognize the safety levels before and after the risk-reduction policy is implemented. Seatbelts are a good illustration of the risk thermostat because people have a fairly good idea of the risks entailed by driving at various speeds with or without a seatbelt. But other risks, like toxic contamination, are much harder for laypeople to judge precisely, so they're susceptible to behavioral overreaction or underreaction when the level of risk changes.
3. The risk thermostat's operation also depends on the risk victim being able to adjust the risk-creating behavior. This possibility for adjustment can be absent in two ways. The first is when there's no performance benefit to be had from riskier behavior. If the EPA cleans some of the lead out of the soil in the vacant lot next door, that's a pure reduction in my risk of lead poisoning. It would make no sense for me to go breathe in a bunch of extra lower-lead dust (thus offsetting the EPA's efforts), because I wouldn't gain anything from it. My level of dust inhalation is dominated by the unpleasantness of inhaling dust, with worries about lead being a very minor aspect of my decision-making (see the sketch after this list). The second type of situation is when the victim is not the risk-taker. Adams discusses the example of pedestrians, who are put at greater risk when seatbelted drivers go whizzing by (though the example is imperfect because pedestrians can adjust their own risk thermostats by avoiding walking by roads).
4. The risk thermostat only applies within a single activity. Much to the dismay of economistic thinkers, people tend to compartmentalize risks, so that they don't make conscious tradeoffs between risks in different arenas. Thus, decreasing the risk of skin cancer won't lead people to make an offsetting increase in reckless driving. This compartmentalization makes thermostat-breaking of the type discussed in point #3 more common.
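Here's the sketch promised in point 3 (again my own toy construction, with made-up numbers). An actor picks an activity level to maximize a simple benefit-minus-discomfort payoff; per-unit risk enters the choice only in the driving-style case, where riskier behavior pays. When risk isn't really part of the decision -- as with the lead dust -- a cleanup passes through as a pure reduction:

```python
# Toy contrast for point 3 (illustrative only; all numbers made up).
# An actor picks an activity level x to maximize benefit*x - discomfort*x**2,
# subtracting per-unit risk from the payoff only when risk is salient to the
# choice (the driving case). In the dust case, lead "is a very minor aspect
# of my decision-making", so risk is left out of the optimization.

def chosen_level(benefit, discomfort, risk, risk_salient):
    # Quadratic discomfort => interior optimum at (marginal payoff)/(2*discomfort).
    payoff = benefit - (risk if risk_salient else 0.0)
    return max(0.0, payoff / (2 * discomfort))

def total_risk(benefit, discomfort, risk, risk_salient):
    return risk * chosen_level(benefit, discomfort, risk, risk_salient)

# Driving: speed pays, so halving per-unit risk raises the chosen level
# and claws back part of the safety gain.
print(total_risk(10, 1, 4, risk_salient=True))   # 12.0
print(total_risk(10, 1, 2, risk_salient=True))   # 8.0 -- only a partial drop

# Vacant-lot dust: the level is set by benefit and discomfort alone, so a
# cleanup that halves per-unit risk halves total risk, with no offsetting.
print(total_risk(10, 1, 4, risk_salient=False))  # 20.0
print(total_risk(10, 1, 2, risk_salient=False))  # 10.0 -- full reduction kept
```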
An interesting implication of all of these points is that they call into question the common objection that reducing environmental risks (such as through contamination cleanups) is an inefficient way to promote safety. The typical claim is that for the money that's put into something like a Superfund cleanup, we could save more lives through traffic safety programs. Yet traffic safety is the paradigm case of a risk thermostat effect, whereas toxic cleanups have features that limit the thermostat -- the extent of the risk is unclear to the victims, and there is either little performance benefit to be had by changing behavior or else the benefit is something like "being able to let your kids play outside" that we consider to be of much more fundamental importance than "being able to drive faster."