Mark Kleiman and Sasha Volokh are engaging in an interesting debate on the "precautionary principle" as it applies to public health policy and government regulation of technological innovation. The essence of the question at hand is: should some human actions be forsaken merely on the grounds that their unforeseen consequences might conceivably be catastrophic, even in the absence of any evidence that such a consequence has any significant likelihood of coming to pass?
Both Kleiman and Volokh (now) seem inclined to give at least some credence to the principle, on the grounds that an unknown risk at least contains a scintilla of a possibility of a disaster, and hence should be averted where possible. As Volokh puts it, "you shouldn't just compare the mean estimates of the benefits but ... you should also take into account the variance, that is, figure out which of the alternatives has more uncertainty and possible unknown bad outcomes and be a little bit biased against it." Or, in Kleiman's words, "any proposal where a plausible story can be told of truly catastrophic risk (i.e., risks equivalent to substantial fractions of total national or world wealth) ought to be forbidden until the probability attached to the risk can be plausibly quantified".
I believe these arguments miss the important distinction between an unknown probability and an uncertain outcome governed by a known probability distribution. When Volokh treats an unknown probability as an implicitly known distribution with high variance, and when Kleiman invokes the word "plausible" to ascribe an implicitly known, non-negligible probability to a bad outcome, they are effectively contradicting their own claims of ignorance about the probability of a catastrophic event. This tacit substitution of "known, small but non-negligible probability" for "unknown probability" is certainly intuitively tempting, but I believe it leads to an important error.
The problem is that this implicitly estimated tiny-but-significant probability is then compared with another unknown probability--the probability of a catastrophic outcome from inaction--which has itself been implicitly replaced with "a negligibly small quantity". This second substitution also has an obvious intuitive appeal, of course--"don't upset the applecart", "don't ruin a good thing", and so on. But in practice, over a long enough timescale, refusing to change at all is almost always disastrous. Certainly, if our society had refused to change over, say, the last two hundred years, the results would have been disastrous by comparison with our current state. This principle is even enshrined in the structure of life itself, which is built to evolve and vary genetically over time rather than present a sitting target to the three p's (predators, pathogens, and parasites) and to unexpected environmental changes.
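To spell out the double substitution in notation of my own (not Kleiman's or Volokh's): if C is the size of the catastrophe, the comparison being made is roughly

\[
\underbrace{p_{\text{act}} \cdot C}_{\text{expected loss from acting}}
\quad \text{versus} \quad
\underbrace{p_{\text{wait}} \cdot C}_{\text{expected loss from standing pat}},
\]

where, by hypothesis, neither \(p_{\text{act}}\) nor \(p_{\text{wait}}\) is known. The precautionary move quietly replaces \(p_{\text{act}}\) with some definite \(\varepsilon > 0\) and \(p_{\text{wait}}\) with zero--exactly the sort of quantitative judgment the appeal to ignorance was supposed to spare us.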
Oddly enough, environmentalists--usually the most enthusiastic proponents of the precautionary principle--are happy to recognize the dangers of stasis when it suits them. One might have thought, for instance, that a radical reduction in the world's fossil fuel consumption might have all sorts of unexpected side-effects, some of which might even prove to be disastrous. However, the (extremely ill-understood) threat of global warming is enough to convince many that inaction is even more dangerous than action in this case. (It all depends on one's baseline notion of "inaction", I guess.)
Now, it may be that in some cases there are fairly well-defined, quantifiable threats that make a precautionary stance reasonable. (For example, by Kleiman's calculation, even a 1% chance of a deliberately induced smallpox epidemic would make pre-emptive vaccination a worthwhile countermeasure. One could easily imagine a defensible model based on real-world knowledge that placed the threat above that threshold.) But in these cases, the "precautionary principle" no longer applies, as the risks are no longer being thought of as unknown and unquantifiable. Conversely, if the risks of a given action--say, that genetic modification of plants could lead to a "superbug" that decimates humanity--are poorly understood, then they cannot automatically be assumed to be greater than the risks of failing to take that action--say, that a "superbug" will arise naturally that only genetic modification technology can prevent from decimating humanity.
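For what it's worth, the arithmetic behind that kind of threshold is an ordinary expected-cost comparison; the figures below are purely illustrative, not Kleiman's. Pre-emptive vaccination is worthwhile when

\[
p \cdot C_{\text{epidemic}} > C_{\text{vaccination}},
\quad \text{i.e.} \quad
p > \frac{C_{\text{vaccination}}}{C_{\text{epidemic}}}.
\]

If, say, a mass vaccination campaign cost on the order of $10 billion and the epidemic it averted cost on the order of $1 trillion, the break-even probability would be 10/1000 = 1%.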
In general, the human instinct to equate constancy with safety is understandable, since arbitrary changes to the status quo are much more likely to be seriously detrimental than beneficial. (Again, our genes lead the way in this respect, attempting to replicate themselves faithfully, within limits.) However, in genetics as in life, the absence of small, gradual changes with no obvious harmful effects can be as disastrous as the presence of large, radical changes with unmistakable harmful effects. The precautionary principle fails to take the former danger into account, and thus treats all changes as if they belonged in the latter category. That makes it a poor guide to evaluating, for example, the risks of new technologies.