Priceless: On Knowing the Price of Everything and the Value of Nothing
By Frank Ackerman and Lisa Heinzerling
(New Press, 277 pp., $25.95)
In protecting the environment, how do America and Europe differ? The standard account is this: Europe follows the precautionary principle; America follows cost-benefit analysis.
According to the precautionary principle, it is better to be safe than sorry. Aggressive regulation is justified even in the face of scientific uncertainty--even if it is not yet clear that environmental risks are serious. According to cost-benefit analysis, regulation should be undertaken not on the basis of speculation, but only if it is justified by a careful quantitative assessment of both the costs and the benefits of regulatory action. The two approaches lead in radically different directions. What should national governments do about the genetic modification of food? Many Europeans argue that the consequences of genetic modification are uncertain and that real harm is possible--and hence that stringent regulation is readily justified. Many Americans respond that the likely benefits of genetic modification are far greater than the likely harms--and that stringent regulation is unsupportable. Or consider global warming. Many European leaders have argued in favor of precautions, even extremely expensive ones, simply to reduce the risk of catastrophe. But under President Bush, American officials have called for continuing research on the costs and the benefits of higher temperatures.
European and American reality is not quite so simple, of course. Europeans are hardly oblivious to costs and benefits; under the Kyoto Protocol, they would reduce the emission of greenhouse gases by a certain percentage below the levels of 1990, rather than below the much lower levels of 1940. And the precautionary principle is not exactly absent from American practice. Under the Clean Air Act, the Environmental Protection Agency is required to build a "margin of safety" into air-quality standards. In the matter of terrorist threats, the United States has certainly embraced a version of the precautionary principle, concluding that we should spend a great deal to reduce risks that cannot be established with certainty. (Bush's doctrine of "preemptive war" can be understood as a precautionary principle.) But in the context of environmental protection, the central tendencies are clear. At the same time that European regulators have been increasingly committed to the idea of precaution, their American counterparts have been growing more insistent on the need for cost-benefit balancing.
John Graham, Bush's regulatory czar, is one of the nation's most prominent advocates of cost-benefit analysis, and in the Bush administration he has done a great deal to promote its use within the federal government. Graham has also offered some cautionary words about the precautionary principle, suggesting that it might lead in undesirable directions. The underlying debates have played a large role in many important controversies, extending, for example, into discussions of mad cow disease, ephedra, clean air, abandoned hazardous waste dumps, arsenic in drinking water, and even national security. And to the dismay of many environmentalists, cost-benefit balancing has become a major part of America's decisions involving health, safety, and the environment.
Frank Ackerman and Lisa Heinzerling deplore cost-benefit analysis. They think that it is a form of pseudo-science, with the pernicious effect of blinding us to the real values at stake. Human lives are priceless, and deaths are not mere "costs." In their view, cost-benefit analysis is morally obtuse, a recipe for capitulation to powerful industries and ultimately for deregulation. Ackerman and Heinzerling want to replace cost-benefit analysis with the precautionary principle, which, in their hands, is "a more holistic analysis" that argues for regulation in the face of scientific uncertainty and that tries to ensure "fairness in the treatment of the current and future generations."
Why are some people so enthusiastic about cost-benefit balancing? In the United States, a part of the answer lies in widely publicized studies that seem to show a high level of inefficiency in modern regulation. According to such studies, regulations are wildly and even comically inconsistent. Sometimes we spend $100,000 (or less) to save a human life. Sometimes we spend tens or even hundreds of millions. Cost-benefit enthusiasts ask: shouldn't we be devoting our resources to serious health problems rather than trivial ones? If we can spend $10 million to save one thousand lives, shouldn't we do that rather than wasting the money on a similarly priced program that saves only one or two people?
Ackerman and Heinzerling respond that the attack on the current system is based on misleading studies, on "urban legends." When we look carefully at the system, we find that few regulations really impose huge costs for trivial benefits. True, some regulations do not prevent many deaths, but they do prevent serious (non-fatal) harm to human health, and also to ecosystems. The resulting benefits should not be disparaged. Ackerman and Heinzerling add that the key studies find low benefits partly because they greatly "discount" future gains to life and health. Everyone agrees that a dollar today is worth a lot more than the same dollar will be worth in twenty years; economists use a standard "discount rate" (about 7 percent annually) to convert future dollars into current equivalents. In calculating the benefits of regulation, they use the same discount rate for mortalities prevented. Ackerman and Heinzerling contend that this approach is immoral--that it discounts future people, and hence wrongly shrinks the value of regulations that will save people in the future.
Suppose that their arguments are right--that existing regulations do not require huge investments for trivial benefits. Regulators still might want to use cost-benefit analysis to improve their current decisions. Ackerman and Heinzerling complain that to do this they will have to produce a dollar value for a human life--and any such effort will be arbitrary, offensive, or worse. The EPA has recently valued a life at $6.1 million, so that twenty human lives are "worth" $122 million. Where does the $6.1 million figure come from? The answer lies in studies of labor markets conducted by a number of economists, most prominently W. Kip Viscusi. These studies show, more or less, that if American workers are asked to face annual accident risks of one in 100,000, their annual salaries are (on average) $61 higher as a result. A simple exercise in multiplication--100,000 times $61--produces a value of $6.1 million for each life. There are now numerous studies of how much consumers and workers are willing to pay to reduce risks, and agencies build on these studies in converting human lives into dollar equivalents. In taking this approach, regulators are not really "valuing life." What they are doing is assigning a dollar value to the elimination of risks--saying, for example, that it is worthwhile to spend $60 to get rid of a mortality risk of one in 100,000 from arsenic in drinking water.
Ackerman and Heinzerling mount a barrage of arguments against this way of proceeding. They contend that workers often have little information about the risks that they face, and hence they cannot be charged with consciously trading hazards against dollars. And even when workers are informed, they may have few options and hence little choice. If they accept a job with significant hazards, it is not because they are really free to choose. This point is strengthened by evidence that unionized workers are paid a lot to assume mortality risks--and hence some economists find that the value of a unionized worker's life is more than double that of a nonunionized worker. Note, too, that for incurring workplace hazards, white workers get far more than African American workers do--a finding that seems to cast doubt on the government's use of labor markets to produce a value for life.
Ackerman and Heinzerling add that the pertinent studies ask only how much individuals care about risks to themselves. They ignore the fact that we often value the lives of others, too. I might be willing to pay just $60 to eliminate a risk of one in 100,000 that I face, but I might be willing to pay much more than that to eliminate that risk from my child's life, and substantial amounts to help reduce the risks of my friends. Altruism is ignored in the current calculations.
In one of their most intriguing discussions, Ackerman and Heinzerling insist that statistically equivalent risks should not be treated the same, because people's valuations of risk depend not only on the probability of harm but also on context. About three thousand people died from the terrorist attacks of September 11--a much smaller number than die each year from suicide (30,500), motor vehicle accidents (43,500), and emphysema (17,500). To say the least, the nation's reaction to the terrorist attacks was not based on simple numerical comparisons: "Most of us, when thinking about and responding to risks to life and health, care about more than numerical probabilities of harm." Drawing on important work by the psychologist Paul Slovic, Ackerman and Heinzerling emphasize that the risk judgments of ordinary people diverge from the risk judgments of experts--not because ordinary people are stupid or confused, but because they have a different framework for evaluating risks. While experts focus on the number of deaths at stake, most of us are especially averse to risks that are unfamiliar, uncontrollable, involuntary, irreversible, man-made, or catastrophic.
Most people are not greatly troubled by the risks associated with X-rays, partly because they are voluntarily incurred. The risks of terrorism, by contrast, are especially alarming because individuals cannot easily control them. And when a risk is faced by an identifiable community--say, when landfills with toxic chemicals are located in largely poor areas--the public is especially likely to object. Ackerman and Heinzerling complain that cost-benefit analysis disregards important qualitative differences among social risks. It also "tends to ignore, and therefore to reinforce, patterns of economic and social inequality" above all because it pays no attention to a key question, which is "who gets the benefits and who pays the costs."
Ackerman and Heinzerling are particularly concerned about how cost-benefit analysts value nature. How much will human beings pay to save an animal or a member of an endangered species? Economists have tried to answer the question by actually asking people. One study found that the average American family is willing to pay $70 to protect the spotted owl, $6 to protect the striped shiner (an endangered fish), and as much as $115 per year to protect major parks against impairment of visibility from air pollution. Ackerman and Heinzerling ridicule these numbers, complaining that any precise monetary value "contains no useful information." Bans on whaling, for example, are rooted in a widely shared ethical judgment, not in cost-benefit analysis. A democracy should base its decisions about the protection of nature on such ethical judgments, rather than by aggregating people's willingness to pay.
Ackerman and Heinzerling offer a final objection to cost-benefit analysis: the rights of future generations. As I have noted, economists generally apply a "discount rate" to future gains and losses. The greater the discount rate, the smaller the current value of future amounts. With a 7 percent discount rate, for example, $1000 in twenty years is worth only $260 today. Cost-benefit analysts within the federal government have long applied the usual discount rate for money (7 percent) to the benefits of safety and health regulation, so that the prevention of one thousand cancers in 2024 is equivalent to the prevention of 260 cancers this year. Ackerman and Heinzerling are appalled. They insist that lives are not like money; your life cannot be placed in a bank for the accumulation of interest. This claim is important, for "discounting has a dramatic shrinking effect on the perceived benefits of regulations that save lives in, or protect the environment for, the future." More broadly, Ackerman and Heinzerling contend that it is ridiculous to do what cost-benefit analysis essentially does, which is to license companies to kill people at a stated price: "Is a corporation or public agency that endangers us pardoned for its sins once it has spent $6.1 million per statistical life on risk reduction?"
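As a rough sketch of the discounting arithmetic at issue, using the standard present-value formula and the 7 percent rate mentioned above (a simplified illustration of the practice, not the government's full guidance):

\[
PV \;=\; \frac{FV}{(1+r)^{t}} \;=\; \frac{\$1000}{(1.07)^{20}} \;\approx\; \$258,
\]

or roughly $260; applied to lives rather than dollars, the same calculation turns one thousand cancers prevented twenty years from now into the equivalent of about 260 prevented this year.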
With this argument, the assault on cost-benefit balancing is complete. What do Ackerman and Heinzerling want instead? The simple answer is that they want some version of the precautionary principle. We should take action against "serious threats even before there is a scientific consensus," so as to protect against the loss of lives before research has become definitive. (Global warming is a case in point.) Above all, they want regulators to adopt a "holistic" approach and to make regulatory decisions by attending to the worst-case scenario. If the worst case is really awful, aggressive regulation is desirable even if we might be wasting our money. A central question is: "How bad could the effects of dioxin, climate change, low-level radiation, and other controversial hazards turn out to be?" If we take excessive precautions, we will spend too much on safety, which admittedly is not good; but overspending is a lot better than catastrophe. Their "preference is to tilt toward overinvestment in protecting ourselves and our descendants." Ackerman and Heinzerling assert that this approach was taken in the context of the military spending of the Cold War, arguing that the nation rightly prepared "for the high-risk case." They see protection against terrorism in similar terms. They want to treat health and environmental risks in just the same way.
For both domestic and international environmental issues, Ackerman and Heinzerling also emphasize the importance of fairness. We need to know who in particular is helped and who is hurt. If environmental threats mostly burden poor people, regulators should take that effect into account. Current generations owe obligations to the future as well. Ackerman and Heinzerling note that each generation might "act like a guardian, empowered only to take care of the earth and its environment until the next generation comes of age." Most generally, they want decisions about health and safety to reflect not economists' numbers, but democratic values, chosen on moral grounds.
This is a vividly written book, punctuated by striking analogies, a good deal of outrage, and a nice dose of humor. The authors raise several good questions about cost-benefit analysis. Certainly regulators should care not only about reduced fatalities, but also about the health gains produced by regulation. They should take account of all the potential benefits, such as protection of ecosystems and animals, including members of endangered species. It is quite crude to say that every life is "worth" $6.1 million; some kinds of risk, and some kinds of death, produce heightened concern. (Cancer risks seem to create more alarm than risks of a sudden, unanticipated death.) Even if studies show that American workers are paid $60 to assume a risk of one in 100,000, it does not necessarily follow that environmental policy should require companies to spend no more than $60 to reduce similar risks associated with air and water pollution. Distributional issues matter. If poor people are hit especially hard by environmental risks, something has gone wrong.
As Ackerman and Heinzerling know, there is no easy answer to the question of how to handle risks that will not materialize soon. Almost everyone would rather eliminate a one in 100,000 risk of mortality faced today than an identical risk that will not be faced until 2024. To this extent, economists and regulators are surely correct to "discount" risks that will not come to fruition for many years. But human beings cannot be banked, and they do not earn interest. In applying the usual discount rate for money to human lives and environmental amenities, regulators have not been sufficiently reflective.
But what should be done with these criticisms? We could read Ackerman and Heinzerling to be calling for an improved and chastened form of cost-benefit analysis--for an assessment that includes all benefits, not just a subset; appropriately values the future; is sensitive to issues of distribution; and is based on more accurate "translations" of social risks into dollar equivalents. But this is hardly what Ackerman and Heinzerling want. They seek to end cost-benefit analysis, not to mend it. They want to replace it with a "holistic" approach that follows the precautionary principle. And it is here that they run into trouble. In deciding what to do, it is inadequate to argue in favor of precautions. We have to know about costs and benefits, too--and we should try to be as quantitative as we possibly can.
To see the point, consider a much disputed question: how stringently to regulate arsenic in drinking water. By itself, it is unhelpful to say that regulators should be "precautionary." The real question is how precautionary to be. Should permissible levels of arsenic be reduced to twenty parts per billion, ten parts per billion, five parts per billion, or three parts per billion? As regulation becomes more stringent, it usually becomes more expensive. It also becomes more protective. (Is that a surprise?) Suppose that the most stringent alternative--three parts per billion--would be extremely expensive, producing an annual national cost of, say, $900 million and a large increase in people's water bills. Suppose, too, that if the most stringent alternative were selected, many people would have to pay $600 more each year for water. Don't costs matter? And suppose that the best evidence suggests that low levels of arsenic pose low levels of risk, so that a reduction from ten parts per billion to three parts per billion is expected to prevent no more than five deaths each year. Don't benefits matter, too? (For the sake of comparison, a recent EPA air-pollution regulation, governing particulate matter, is expected to prevent more than three hundred deaths annually.) To know what precautions to take, we had better investigate the likely costs and benefits of different courses of action--even when science does not enable us to identify them with anything like precision.
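To see how the trade-off looks in these figures--which are the hypothetical ones given in the preceding paragraph, not the agency's actual arsenic estimates--the implied cost per statistical life saved by moving from ten to three parts per billion would be

\[
\frac{\$900 \text{ million per year}}{5 \text{ deaths prevented per year}} \;=\; \$180 \text{ million per life},
\]

roughly thirty times the $6.1 million per life that the EPA uses elsewhere.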
Ackerman and Heinzerling are aware of these arguments. They know that defenders of cost-benefit analysis insist that resources are limited, that policy options require trade-offs, and that we should choose the most cost-effective approaches. They respond that "resources are of course ultimately limited, but there is no evidence that we have approached the limits of what is possible (or desirable) in health and environmental protection." As an illustration, they refer to the fuel efficiency of automobiles, which can be greatly improved before we reach "the ultimate constraint." They are right; we have not reached the limit of what is possible for fuel efficiency. But in reducing risks, should we really seek those limits? If the government ordered it, I have no doubt that American automobiles could achieve much higher fuel efficiency by 2007 than they achieve today. But what would be the expense of that achievement, and how much would be gained by it? Don't we have to know not what is possible, but what is best?
Of course it is difficult and uncomfortable to assign monetary values to human lives or to risks of death. Many people find the very idea preposterous. But whenever government decides how much to reduce risks, it is implicitly assigning such values. The government has a choice about the stringency of air pollution regulations governing carbon monoxide, which contributes to many adverse health effects, including death. No one argues that government should try to eliminate carbon monoxide from the ambient air; such an effort would require the elimination of the internal combustion engines that now power most cars and trucks (not to mention fossil fuel combustion processes, on which the nation continues to depend for electricity). Any regulator will acknowledge that at some point the cost of additional risk reduction is just too high. Why not be honest about that? Ackerman and Heinzerling cannot really deny that the current system of regulation does show inexplicable and apparently unjustified patterns--large sums devoted to small risks, small sums devoted to large risks. The government's decisions will be far more transparent to the public, and far more consistent, if the government says how much it is willing to pay to prevent a risk of death. There is little to be said for hiding the ball.
In any case, the assignment of monetary values to human life can spur more regulation, not less. One of the most important environmental initiatives in the nation's history--the phase-out of lead in gasoline--was strongly supported by cost-benefit studies. Under John Graham's leadership, the Bush administration has pioneered the idea of "prompt letters," by which the Office of Management and Budget asks agencies to issue new regulations when the benefits seem to exceed the costs. A prompt letter in 2002 helps account for the government's 2003 decision to require companies to disclose levels of trans fatty acids among their "nutrition facts"--a decision that is expected to save hundreds of lives each year.
Ackerman and Heinzerling would prefer to replace cost-benefit analysis with a European-style precautionary principle. But in many contexts, that principle is worse than unhelpful; it is utterly incoherent. Risks are often found on all sides of a social situation, and risk reduction itself produces risks. If the Food and Drug Administration carefully screens medicines before they come on the market, it will undoubtedly prevent some death and illness--but it will also cause death and illness, because it will delay the availability of beneficial medications. What does the precautionary principle counsel? Or suppose that the precautionary principle is invoked as a reason for banning genetic modification of food, on the ground that genetic modification creates risks to human health and to the environment. The problem here is that genetic modification of food also promises benefits to human health and to the environment--and so regulation itself runs afoul of the precautionary principle. Or return to the case of fuel efficiency. If government mandates fuel-efficient cars, manufacturers might respond by producing vehicles that are smaller, lighter, and less safe--and hence fuel-efficiency requirements seem to violate the precautionary principle. Perhaps government could require new cars to be safe as well as fuel-efficient, but then fewer people would be able to buy new cars, and hence older cars, many of them less safe and less fuel-efficient, would stay on the roads longer. Wouldn't that violate the precautionary principle?
The problem is ubiquitous. Indeed, a whole field has grown up around the idea of "risk-risk tradeoffs." (It is discussed in depth by John Graham and Jonathan Wiener in Risk vs. Risk: Tradeoffs in Protecting Health and the Environment, which appeared in 1995.) When multiple risks are involved, it makes sense to accompany cost-benefit analysis with an appreciation of the need for precautions against the worst cases. If catastrophe is possible, stringent regulation might be adopted as a form of insurance. But this judgment should be undertaken only after a careful analysis of the relevant risks and the costs of reducing them.
Ackerman and Heinzerling do not sufficiently appreciate the risk that expensive regulation will actually hurt real people. Consider their seemingly offhand remark about protection against workplace hazards: The "costs of the regulation probably would be borne by the employers who would be required to maintain safer workplaces." That is far too simple. The costs of regulation are often borne not only by "employers," but also by consumers, whose prices increase, and by workers, who might find fewer and less remunerative jobs. When government imposes large costs on "polluters," consumers and workers will probably pay part of the bill. Since this is so, it is especially important to learn how much regulation costs and how much we're getting for it.
In the end, Ackerman and Heinzerling's argument seems to me to suffer from the authors' anachronistic and even Manichaean view of the regulatory world. In their rendition, regulators can either stop evildoers from hurting people or prevent serious threats to human health and the environment. That is the right way to think about some environmental problems, to be sure--but most of the time environmental questions do not involve evildoers or sins. They involve complex questions about how to control risks that stem both from nature and from mostly beneficial products, such as automobiles, cell phones, household appliances, and electricity. In resolving those questions, we cannot rely entirely on cost-benefit analysis, but we will do a lot better, morally as well as practically, with it than without it.
Cass R. Sunstein is a contributing editor.