It's been a long time since I've read anything as frightening as FAS president Henry C. Kelly's op-ed in the New York Times. "Within a few years," he writes, "it may be possible for an inexperienced graduate student with a few thousand dollars' worth of equipment to download the gene structure of smallpox, insert sequences known to increase infectiousness or lethality, and produce enough material to threaten millions of people."
Now, I don't know much about molecular biology, and I don't know if the analogy is accurate, but this scenario sounds an awful lot like one I'm familiar with: computer hacking. In both cases, a system full of vulnerabilities is subject to scrutiny by thousands of imaginative (or simply persistent) attackers. But the worst computer hackers can do is destroy data, and perhaps disrupt communications and other infrastructure. Bio-hackers can potentially kill millions.
There are some differences, of course. A biological superbug would be much more dangerous to handle, and harder to test during development, than a computer attack. (An anthrax-like pathogen that doesn't spread from person to person, though, would be about as easy to target as a computer hack.) The human immune system is generally more robust than computer systems, having evolved to deal with a wide variety of pathogens over millions of years. (Of course, it has never had to deal with human ingenuity before.)
The fundamental similarity, though, is that the technology massively favors the attacker, who can choose the time, place and method of attack, over the defender, who must try to react to protect entire populations before they're infected. Frankly, I don't like our odds.