Thomas Friedman, in his recent column, wonders, quite understandably, whether by "overreacting to 9/11" with excessive precautions against terrorism, we may ultimately "drive ourselves crazy long before Osama bin Laden ever does". Some of his complaints about security measures--those that are pointlessly ineffective, for instance, or protect against ludicrously tiny risks, or merely increase anxiety without improving safety at all--are perfectly valid. (Gregg Easterbrook argues, in a similar vein, that the nuclear or "dirty bomb" threat is far more serious than the much-hyped specter of a chemical- or biological-weapons attack, and offers some reasonable-sounding preparedness advice.)
But the end of Friedman's column makes a much more dubious suggestion: "Leave the cave-dwelling to Osama"--i.e., ignore the threats and enjoy life. He brags that "the only survival purchase [he's] made since Code Orange is a new set of Ben Hogan Apex irons", and encourages others to create "survival kits" stocked with "a movie guide, a concert schedule, Rollerblades, a bicycle".
I have a sneaking suspicion that if Friedman really did perceive a danger to himself or his family--if he lived in Israel, for example--then he might not be so nonchalant about it. In fact, he ridicules "pseudo security, like when you go to Washington Wizards basketball games and they demand you open your purse or bag"--presumably knowing that in Israel, such bag-checkers have managed to greatly reduce the casualty tolls of several suicide bombings, by preventing the attackers from entering crowded shopping malls.
But the real purpose of Friedman's show of insouciance is clearly social rather than practical. The golf clubs are a tip-off; he would never have been so quick to sound off had his only purchase since the "code orange" alert been something a little more downscale--say, a new bowling ball. His display of bravado is thus obviously meant to convey a cosmopolitan, somewhat aristocratic spirit, reminding his readers of his swashbuckling, globetrotting journalist persona. Constant fear for one's security is the mark of the lower classes, after all; the wealthy and powerful, having been raised in a state of invulnerability, are expected to see themselves as above petty worries about personal safety. (I speak here specifically of threats of violence; Friedman would no doubt be far more solicitous towards more effete sensitivities--say, to pesticides or other chemicals in food, or industrial residues in the air or water. Chicken Little was middle-class; the Princess who detected a pea under her mattress most certainly was not.)
There are, we should note, thousands of American men and women about to put their lives at serious risk in combat against a dangerous enemy possessed of infinite ruthlessness and truly horrifying weaponry. Few of them, I suspect, are pompous enough to boast cavalierly about their willingness to place duty to country above personal safety. But placing golf above personal safety--now there's a courage noble enough for a fellow to trumpet to the world, and cockily challenge his readers to emulate.
Thursday, February 27, 2003
Wednesday, February 26, 2003
Katie Roiphe makes an interesting point in Slate magazine about Graham Greene's famous novel, "The Quiet American", and its recent film adaptation. The book, published in 1955, presents an American CIA agent in Vietnam through the eyes of a grizzled British expatriate who happens to be caught with him in a love triangle involving a beautiful local girl. The American character is portrayed as (in Roiphe's words) "almost every cliché of the naive American abroad that has ever amused a turtlenecked European", from "the maddening simplicity of his political beliefs to his embarrassing sexual earnestness."
Nevertheless, as Roiphe points out, the book, though vilified by Americans at the time, is far from resolutely anti-American: "Greene's exquisite condescension toward America is tinged, at every turn, with a kind of grudging appreciation.... Though the book's affection for America—its energy, its innocence, its belief in changing the rotting world—is couched in fierce criticism, the novel presents a much richer and more nuanced view of Americans abroad than it has been given credit for."
The more recent film, however, paints a much darker picture: beneath his sunny ignorance, the new "Quiet American" is eventually revealed to be a "ruthless CIA mastermind", as Roiphe puts it. This shift, she notes, mirrors the change in America's European image from then to now: once viewed as a vibrant-but-immature, strong-but-bumbling do-gooder, America is now more frequently depicted by Europeans as a kind of all-powerful, thuggishly ignorant, uniformly malevolent force.
What Roiphe never discusses is the source of this shift in European attitudes. It's hard, after all, to argue that America's behavior has changed for the worse; in fact, the America of 1955 was, relative to the rest of the world, incomparably wealthier, more dominant, and more ruthless than the America of 2003. From Iran to Guatemala to Lebanon, 1950s America was asserting its might everywhere, toppling unfriendly regimes and installing its clients without a second thought. Cuba's open political effrontery--let alone Vietnam's brazen military challenge--was far in the future, and as yet virtually unthinkable. Domestically, Joseph McCarthy had only recently been discredited, and fierce anti-Communism was still the conventional stance at home and abroad. The civil rights movement had barely begun, and segregation was still being enforced throughout the South. The evolution of the "Quiet American" stereotype from the 1955 version to the current one is thus hard to attribute to any change in American society or geopolitics.
On the other hand, consider the contrast between (Western) Europe in 1955 and 2003. Nearly fifty years ago, several European countries still had substantial colonial holdings, primarily in Africa. Britain was one of the world's three nuclear powers, and its (as well as France's) permanent veto-wielding membership on the UN Security Council seemed perfectly natural, given its colonial influence and independent global military reach. The Cold War, the overarching geopolitical conflict of the time, was being played out on European territory. The idea of the American Secretary of Defense suggesting that France and Germany might one day be eclipsed by Poland and Hungary (let alone that they already had been) would have seemed utterly absurd.
Given this striking contrast between Europe's stature in the world then and now, is it any wonder that the amused condescension of a previous era's confident Europe towards upstart America has since given way to bitter resentment and suspicion of American hegemony?
Tuesday, February 25, 2003
As a former resident of a snow-rich region now living in a snow-starved one, I am delighted to offer a bit of advice to snowbound Israelis, in the hope that they might avoid the common mistakes of the snow-neophytes living in these parts:
Cars are actually fairly powerful machines, and can often drive through small amounts of snow at speeds even greater than normal human walking speed.
If you nevertheless suddenly feel uncomfortable attempting to drive through the current depth of snow, it is generally considered good etiquette to pull your car over to the side of the road before abandoning it in a fit of feckless panic. This rule is particularly important on major highways and thoroughfares.
If you should find your car stuck in the snow, with your car's wheels spinning furiously to no avail, it is unlikely that sitting in your car and continuing to (literally) spin your wheels for hours on end will do any good. On the other hand, a solid push to the back of the car (best administered by a passenger or passerby), in conjunction with a light tap on the accelerator, can often dislodge the car safely.
If multiple drivers find themselves stuck in close proximity, then they can assist each other in this manner to free each car in succession, thus possibly saving an important thoroughfare from becoming clogged by abandoned cars.
If you decide to ignore these suggestions, please make sure that no snow-literate people are around to notice. Otherwise, be prepared to be asked whether you are from the Seattle area.
Monday, February 24, 2003
It is often said that sex education can help reduce unwed teen motherhood, but rarely is the concept taken this literally. (If you're having trouble imagining what the experience might be like, go here and search for the word "pupils".) A similar argument is often made for drug legalization: that it would demystify drug use, depriving it of the glamor of the forbidden. Call it the "Tom Sawyer" argument, applied to sex or drugs instead of fence-painting. (As far as I know, it hasn't yet been suggested as a cure for rock 'n' roll.)
Proponents of this approach obviously didn't attend my junior high school, where a rigorous compulsory program of physical education never managed, so far as I could tell, to dissuade the jock students from their love of sports. In fact, the sex-instruction advocates are making an assumption that's even goofier than Tom Sawyer's: that endorsing activities that don't "go all the way" may dissuade teenagers from, well, going all the way. (I can just imagine how all those baseball-playing junior-high jocks would have reacted to a gym class suggesting that it's okay, really, to stop at second or third base.)
As I've mentioned before, it's a fair bet that an "educator" who makes lame arguments in favor of some self-evidently ineffective education method is not so much naive or misguided as indifferent, at best, to the method's supposed educational goal. And there are plenty of people who consider teen sex a perfectly salutary phenomenon, and see no reason even to attempt to discourage it. But the peer pressure on kids to become sexually active at a ridiculously early age is already enormous, and I shudder to think how much worse it will be for them in schools where the teachers are, if you'll pardon the expression, egging them on.
Saturday, February 22, 2003
Of all the justifications for racial preferences, "diversity" is the most beloved by preferences' defenders, because it's so wonderfully straightforward, in a question-begging way: how can the goal of "diversity" be achieved, after all, except by altering standards to ensure the right level of diversity? Mark Kleiman, in a lengthy meditation on Glenn Loury's statistical argument to that effect, comes down in favor of "arguing frankly about the terms of the tradeoff" between diversity and color-blindness. He's clearly unhappy with the current de facto quota system in university admissions, but having accepted the legitimacy of the diversity goal, he's pretty much lost the argument before he's even started.
The cleverness of the "diversity" argument is that it, like so many other arguments about race in America, hides its true meaning behind euphemism and deceptive ambiguity. In the short term, "diversity" is simply a euphemism for ensuring that a given population (say, a university student body) contains the requisite number of members of certain minority groups--i.e., quota-setting. As such, it should be easy to argue against, in the same way that the euphemism "affirmative action" hardly stops its opponents from arguing against quotas. However, diversity has a longer-term meaning that is much harder to argue against: to "value diversity" is, implicitly, to accept laudable principles like racial equality, non-discrimination against minority groups, and racial integration at all levels of society. After all, if it doesn't bother you (as it evidently does Kleiman) to see, say, a classroom with virtually no black students in it, then how can you really support these values?
And here's where the short- and long-term meanings of "diversity" can actually diverge, in an interesting--and, I believe, very telling--way. Now, I happen to be an enthusiastic supporter of all the ideals encoded into the long-term meaning of "diversity", but I vehemently oppose it in its short-term sense. My justification is simple: I have literally not the slightest doubt that color-blind policies will lead, in the long-term--and probably sooner than most people expect--to "diversity" in the short-term sense (i.e., representation levels of minority groups in, say, university populations--including elite ones--that are approximately proportional to those groups' presence in society at large). After all, eventual integration into, and mirroring of, the overall traits of the general population has been the fate of every single other racial or ethnic minority in the history of America. And while each group's history has been different (with that of African-Americans being a particularly striking anomaly), no group today can claim any conditions (and especially not levels of racism) which were not also experienced by past groups that nevertheless eventually assimilated completely. There is thus no reason to think that the future of any current minority group will be any less bright.
But if I believed differently--if I imagined that the current disparities among racial and ethnic groups in, say, qualification for elite colleges were set in stone (or at least likely to persist into the indefinite future)--then I might conceivably feel quite differently, as well, about color-blind policies which by implication perpetuate these disparities. And it does seem as though supporters of "diversity"-based racial preferences adopt a "what else can we do" attitude that suggests a belief that these disparities will not disappear on their own. And so I ask supporters of these policies: do you, indeed, doubt what I take as self-evident--that reversion to the mean is simply a matter of time? And if so, then why do you believe that today's minority groups will fail where so many others have succeeded?
Thursday, February 20, 2003
Just before the 2000 election, Slate magazine polled its staff, and discovered that support for Al Gore was overwhelming (and for Nader, substantial). Editor Michael Kinsley defended his publication against the charge of bias: "for the millionth time!—an opinion is not a bias!" But the issue was never bias; it was balance. Unlike partisan publications such as, say, The Nation or National Review, general-interest journals of politics and culture are supposed to cover the gamut of relevant points of view. When the staff is so thoroughly partisan, though, readers get--well, they get articles like "Roll Call".
Billed as a survey of "prominent people in politics, the arts, entertainment, business, and other fields" and their views on invading Iraq, the article originally presented the opinions of 27 people of an astoundingly homogeneous cast. (At least one has been added since then; perhaps more will follow.) The list includes four former Democratic Party politicians, including three former Clinton administration officials; four alumni of the Washington Monthly (a legendary liberal political journal whose staff once included Kinsley); three from the masthead of The American Prospect (a not-so-legendary liberal political/economic journal); and one each from the New York Review of Books, Dissent, and The Nation. And I haven't even mentioned nouvelle radical Arianna Huffington, labor lawyer Tom Geoghegan, or Brookings Institution scholar (and Center on Budget and Policy Priorities board member) Henry J. Aaron. The business world is represented by two individuals--one of the former Clinton administration officials and a financier who made number 131 on the Mother Jones 400 for his nearly $300,000 in political contributions in 2000--all of it to the Democratic Party.
Thus without double-counting (Robert Reich is both a Clinton man and an American Prospect co-founder), the 27 include a total of 17 explicitly self-identified liberal, leftist or Democratic Party-affiliated personalities (63%)--for all but one of whom the identification was, at one time at least, in an official capacity. In contrast, there is one self-identified conservative in the list (Peggy Noonan), and three others who may or may not call themselves conservative, but who are frequently labeled as such (Heather MacDonald, John McWhorter, and Charles Murray). The remaining 6 include a novelist, an essayist, filmmaker Spike Lee, a comedy writer, and two military veterans-turned-journalists.
Now, it is certainly conceivable that such a politically monochromatic group (ranging from centrish to leftish liberal Democrats, with a few outliers) could have produced a breathtaking array of original and varied insights into the question at hand. In fact, though, they did not; the generally canned-sounding responses added staggeringly little to the regular Slate ruminations of Fred Kaplan (a former staffer for the late Democrat Rep. Les Aspin), Mickey Kaus (another Washington Monthly alumnus), or, for that matter, Kinsley himself.
If Slate were an openly partisan rag, it surely would never have dared inflict on its fellow loyalist readers such a collection of bland, conventional, uninspiring commentary. Its pages would instead be filled with either sparkling internal debate or, if consensus had been reached, rousing calls to the faithful. And if Slate were at all serious about its professed political ecumenicalism, it would have presented a rich, eye-catching smorgasbord of views from all across the political and social spectrum. But because it's the worst of both worlds--a liberal stronghold masquerading as a pluralist publication--it can neither rally nor provoke; it can only bore.
Wednesday, February 19, 2003
TNR contributor Lawrence Kaplan has written an op-ed in the Washington Post essentially accusing of anti-Semitism those writers--including Pat Buchanan, Georgie Anne Geyer, Chris Matthews, Robert Novak, and several other lesser-knowns--who blame the Bush administration's militancy against Iraq on the alleged pro-Israel loyalties of an influential group of Jewish "neoconservatives" in the Bush camp. Mickey Kaus calls Kaplan's claim "disingenuous and self-refuting", and attributes some merit to the "dual loyalties" charge. (He also points out a Post analysis piece by Robert Kaiser that pretty much endorses the Buchanan-Geyer-Matthews-Novak view.)
Now, nearly every "anti-war" demonstration on the planet these days is replete with banners egging on the terrorist campaign against Israel. And several prominent American foreign-policy luminaries openly link their opposition to an American attack on Iraq with their desire that the US concentrate on pressuring Israel to make concessions to the Palestinians. Under these circumstances, it's certainly odd that somehow the Jews in the Bush administration are the ones whom Kaus has chosen to suspect of loyalty to a foreign cause.
Still, Kaplan's accusation of anti-Semitism is, to be blunt, a misguided and superfluous ad hominem attack. Perhaps some of his targets are anti-Semitic, and perhaps they are not; but their position on the war on Iraq, or on neoconservatives, is pretty weak evidence one way or the other. For example, Kaus and Joe Klein both take the "dual loyalty" charge seriously, and presumably neither of them is a rabid anti-Semite.
The real problem with the Kaus/Klein thesis, though, is that it is every bit as superfluous and irrelevant an ad hominem attack as Kaplan's original anti-Semitism accusation. Perhaps Paul Wolfowitz, Richard Perle and Douglas Feith are deep-cover moles for the Mossad--and perhaps they're not. Perhaps Mickey Kaus and Joe Klein are in Yasser Arafat's pay--and perhaps not. Perhaps Martin Luther King Jr. had Communist sympathies (he did), Nelson Mandela was a terrorist leader (he was), and Margaret Sanger (the founder of Planned Parenthood) was a eugenics enthusiast (she was). But apart from historical interest, these details of the unsavory motivations of certain leaders have no bearing on the justice of the causes which they led. And opponents of those causes who treat the leader's motivations as evidence against the cause (as has often, in fact, been done) are merely demonstrating their inability to mount serious arguments for their oppositional case.
The motives of the Buchanans and Matthews and Novaks and Kauses and Kleins of the world are probably more complex than mere reflexively anti-Israel partisanship--let alone anti-Semitism. Buchanan and Novak do, indeed, have long histories of demonizing Israel, often most unfairly. Matthews, on the other hand, cultivates a working-class Catholic populist perspective, and tends to enjoy picking fights with elite intellectual types like the Pentagon's cerebral neoconservatives. Klein's point of view, judging by his writing, is that of an unreconstructed Peres/Beilin-style Israeli dove, to whom American Jewish neoconservatives' real crime is not support for Israel but rather support for Sharon's hawkish Likud party. And Kaus is an inveterate liberal contrarian who loves taking stances that his ostensible political allies would find irritating.
But their basic argument--that attacking Iraq is bad because high-ranking Pentagon officials are motivated by staunch pro-Zionist views--is self-evidently vacuous. And their corollary--that the US should back off Iraq and start putting pressure on Israel to capitulate to Palestinian demands--is also just plain foolish, for reasons I've amply detailed elsewhere. Moreover, it remains irredeemably dopey irrespective of the loyalties of either the view's proponents themselves, or of the Pentagon officials they excoriate.
Monday, February 17, 2003
Mark Kleiman and the Volokh Conspiracy's pseudonymous "Philippe de Croy" agree that the recent "code orange" terrorist alert issued by the Department of Homeland Security was a pointless exercise. As "De Croy" put it, "the costs created by the warnings -- in anxiety, distraction, and wasted time -- greatly exceeded any expected benefits from them."
I think both are seriously underestimating those benefits. First of all, warnings like the current one might actually be correct, in the sense of having been triggered by observations that are in fact signs of a real, specific, imminent attack. In that case, it is quite possible that a terrorist cell, suspecting that the authorities may know too much about them, might decide to postpone the attack for fear that their plans may be compromised. (Note that this result may occur even if the authorities don't actually have any information about the particular plan in question, but have issued the alert based on an entirely distinct set of clues.) The alert could thus, all by itself, end up saving many lives. Moreover, such a change of plans on the part of the terrorists--or even consideration thereof--may involve activities, such as contact with leaders abroad, financial rearrangements, or movement of personnel, that may expose them to detection.
Of course, the terrorists may decide not to change their plans at all. In that case, an alert probably increases (albeit only slightly) the likelihood that the attack will be detected or foiled by security officers or ordinary citizens exercising an extra degree of vigilance. Certainly it's hard to argue that in such a case, the alert was pointless.
But what if there's no attack planned at all? Even then, alerts can be useful (if they're not too frequent). Again, they provide an opportunity to "tweak" the terrorist network and see how it responds. Events (activations of particular communications channels, for instance) that correlate well with alerts can be noted and investigated for possible intelligence value.
Finally--and here is where I disagree most directly with Kleiman and "de Croy"--there is the matter of the effect of alerts on public attitudes. Both gentlemen agree that, as Kleiman puts it, "it would help if the population cultivated an attitude of calm rather than one of panic." Well, I can think of nothing more likely to reduce the public's level of panic than frequent warnings that force people to confront, accept, and eventually learn to live with the ever-present possibility of another attack. As I noted previously in a discussion of the DC sniper, when it comes to terrorist threats--or low-grade dangers in general, for that matter--familiarity tends to breed contempt. Holding off on warnings may, in the short run, reduce society's overall level of worry, but at the cost of making the next attack (God forbid) even more traumatic. The small (and surely declining) flurries of panic we experience now when alerts are sounded should pay off later in the form of more orderly public responses to future terror threats.
Saturday, February 15, 2003
Reading an excerpt from Mona Charen's new book on the Cold War, one could easily get the impression that pacifist appeasers of the Soviet Union dominated the US throughout the eighties. After a cursory acknowledgement that "millions....were persuaded that [President Ronald] Reagan's view of the conflict between East and West was correct," Charen indulges in a long, gleeful evisceration of that era's liberals and their woolly-mindedness about the "evil empire". While not inaccurate, her description underplays the crucial context that helps explain a lot of the silliness: Reagan was the president, his suspicion of the Soviets was the majority (if not necessarily the elite) view, and soft-on-Communism liberals were comfortable drifting as far into appeasement as they did in no small part because the country's, and the government's, resolve protected them.
I am reminded of this important distinction by French ambassador Jean-David Levitte's New York Times op-ed explaining French resistance to the American drive for war against Saddam Hussein. "[F]ar and away the biggest threat to world peace and stability," he writes, is "Al Qaeda". Moreover, "Iraq is not viewed as an immediate threat. Saddam Hussein is in a box." Finally, "Europeans consider North Korea a greater threat."
Now, it would be easy to laugh at this analysis; after all, why would France, which did precious little to uproot Al Qaeda from Afghanistan, has for years been among the foremost advocates of releasing Saddam Hussein from his "box", and is not about to lift a finger to defend itself or anyone else against North Korea's "threat", feel entitled to tell America which French problems to solve first?
But such indignation misses the point: European opponents of American assertiveness generally feel safe indulging in a bit of arrogant moral and strategic preening precisely because they know they can rely on the US to do what needs to be done. And if America were to abdicate its leadership role, Europe might well fill the void with a little "unilateralist" assertiveness of its own. (Consider, for instance, West Africa, where America's docile policy of non-interference in the region has led the French to respond with....military intervention.)
A year ago, writing about the war on terrorism, I described Europe as America's "tempestuous girlfriend", and argued that "US unilateralism is thus, in the end, preferable for everyone". I think my posting stands up pretty well in retrospect. Sure, Europe stamps her feet and shouts, "no, no" now--but later, after Saddam Hussein is safely disposed of, and the lights are turned down....
Friday, February 14, 2003
The New York Times op-ed page is an endless source of amusement; the latest example is Nancy Soderberg's column on dealing with North Korea. Now, I've already pointed out that "negotiating with North Korea" is a meaningless phrase, given that country's unblemished record of ignoring any agreements it negotiates. But the storied history of that record, and the confusion it engenders in some foreign policy experts, is on spectacular display in Soderberg's opinion piece.
"The Bush administration should look more closely at the history of negotiations with North Korea," writes Soderberg. What would it see?
"The Bush administration should look more closely at the history of negotiations with North Korea," writes Soderberg. What would it see?
"[T]o encourage the North Koreans to join the Nuclear Nonproliferation Treaty, the Soviet Union offered to build them four nuclear reactors in the late 1970's and early 1980's. While the reactors were never built, North Korea did join the treaty in 1985.And the conclusion?
"In 1989, the United States learned that North Korea might be processing nuclear material, thus violating the treaty. So President George H. W. Bush struck his own deal....In exchange for North Korea's agreeing to let the atomic energy agency monitor and inspect its nuclear facilities, he withdrew United States nuclear weapons from the Korean Peninsula, canceled the annual joint United States-South Korea military exercise and agreed to a high-level meeting with North Korean officials....
"In 1993, the Clinton administration discovered that the North Koreans were cheating on this deal, too. A year later, the administration signed what became known as the Agreed Framework, under which North Korea's plutonium-based nuclear program was frozen and the international community provided North Korea with light-water reactors and fuel oil while the reactors were being built. North Korea violated the framework several years later by starting a uranium-based nuclear program....."
"If he has learned from history, Mr. Bush will negotiate directly with the North Koreans. In exchange for an end to both of North Korea's nuclear programs and tougher inspections, he will need to put new incentives on the table...."Well, somebody should certainly look more closely at the history of negotiations with North Korea.
Thursday, February 13, 2003
William Safire is ranting about the Pentagon's "Total Information Awareness" (TIA) program again (as he's done before). Heather MacDonald does a good job of skewering his absurd alarmism; if her explanation of TIA is correct, then the Senate has actually forbidden law enforcement authorities to apply to their own databases approximately the same techniques used by credit card companies to sniff out potential cases of fraud.
Safire and his fellow TIA opponents speak out in defense of "privacy", a notoriously vague term whose de facto working definition is, "preventing information about me from getting into the hands of people whose motives I tend to suspect". But "privacy"-defenders often have a much more specific set of scenarios in mind, in which a large, powerful institution (usually a government or commercial enterprise) obtains information about an individual--information whose revelation shouldn't have adverse consequences (in the eyes of privacy advocates, at least), but nonetheless might. (Popular examples of such information typically involve evidence of drug use, sexual unconventionality or political heterodoxy, but may include more innocuous data such as consumer purchasing habits.) Such revelation can lead to "misuse" of private information--anything from public exposure to annoying direct marketing to merely profiting from knowledge of aggregate statistics. Discussion of the use of private information to achieve generally laudable goals--catching terrorists or violent criminals, for instance--is inconvenient for privacy advocates, and they therefore prefer to shift the debate back to "misuse" scenarios.
The problem with this "Big Brother" view of privacy, though, is that its adherents are probably "fighting the last war", instead of anticipating the next one. And the major next-generation threat to "privacy", I believe, can be summed up in one word: Google.
Google and its undoubtedly more powerful future information-gathering counterparts threaten the very core of the privacy advocacy community's "Big Brother" worldview, by undermining the power asymmetry lying at the heart of it. The Freedom of Information Act and the Privacy Act, for example, are both beloved of privacy advocates, though one explicitly mandates the public revelation of a great deal of private information (by a powerful entity--the government) and the other requires that private information not be disclosed (about a powerless entity--an individual citizen). What happens, though, when the information-gathering "power" of an individual becomes as great as that of an organizational behemoth like a government agency or large corporation?
Privacy advocacy first gained national prominence in the sixties, when large computer databases began giving their owners the power to do sophisticated sorting and searching of voluntarily-supplied personal information. Today, though, an individual armed with a PC and an Internet connection can gather and manipulate more information, about more individuals, and with greater sophistication, than any corporation could ever have thought possible forty years ago. And the technology is still in its infancy; true large-scale TIA-style data-mining of, for, and by the masses will likely be widely available in the not-too-distant future. We will then have created a huge "global village" of nosy neighbors, in which everyone can find out just about anything about anyone. What, then, will "privacy"--let alone "protecting privacy"--even mean anymore?
Wednesday, February 12, 2003
Does oil undermine democracy? Energy consultant J. Robinson West, writing in the Washington Post, thinks so; Oxblog's David Adesnik appears to agree. West sums up the case as follows: "Nearly every country with an economy dominated by oil is corrupt and dictatorial, whether in Latin America, Africa, the Caspian, Southeast Asia or the Middle East. The notable exception is Norway."
I am reminded of Amy Chua's similar confusion of cause and effect regarding ethnic conflict. Chua, you will recall, argued that the advent of free-market democracy in some countries leads to the rise of "market-dominant minorities", which in turn incites ethnic hatred, often culminating in brutal violence against the successful minority. (I offer a hearty ICBW welcome to the literally hundreds of Googlers who found this blog while searching for "Amy Chua"--presumably after seeing her hawk her book on C-SPAN's "Booknotes" this past weekend.)
Unfortunately, Chua got it exactly backwards: societies lacking in the kind of social (including interethnic) peace and civility associated with modern, educated free-market democracies also tend to be so economically underdeveloped as to depend on a tiny (usually immigrant) minority for nearly all their economic activity--hence the "market-dominant minority", and the resentment it arouses. Societies not prone to slaughtering their successful subcultures, on the other hand, have typically figured out how to advance economically to the point where no single subculture can dominate the economy.
Similarly, oil is not inherently disruptive of responsible democracy; not only has Norway's democracy survived an oil boom intact, but so has democracy in Mexico, Western Canada and (of course) various regions of the US, such as Alaska. Now, these places' economies aren't as completely oil-dominated as those of, say, Saudi Arabia or Nigeria--but that's primarily a reflection of their already-modern, developed societies, not of their oil or the revenue it earns. That is, their relatively advanced economic and political development makes their oil industries a less dominant economic and political factor--not vice versa.
Venezuela is a striking example; its multi-year decline from corrupt but functioning democracy to populist near-dictatorship is obviously uncorrelated with its steady oil revenue over that period. Rather, the strain on its democratic institutions is a product of a huge, growing, and increasingly dissatisfied impoverished class, ripe for exploitation by a charismatic demagogue like the current president. Had Venezuela had no oil at all, its poverty (and civil strife) problem would surely have been, if anything, even more severe, and its democracy thus even more imperiled.
Likewise, the backward oil satrapies of the Middle East, Africa, and Central Asia are generally no more (nor less) corrupt or dictatorial than their petroleum-deprived neighbors. They may be unable, because of their economic and political underdevelopment, to put their oil windfall to good use, but neither are they harmed the way West and Adesnik would have us believe. For each such country, after all, there is a nearby one with every bit as much corruption, poverty and repression, but no spectacular resource income to waste. How, then, can oil be to blame?
Monday, February 10, 2003
Andrew Sullivan and Thomas Friedman have both come out in favor of a "mend it, don't end it" approach to the UN. Sullivan suggests, approvingly, that US Secretary of State Colin Powell's newfound militance regarding Iraq "is as much about rescuing the U.N. as it is about protecting Western citizens from Saddam's nerve gas." Friedman argues for replacing France with India in the Security Council, because "India is just so much more serious than France these days", particularly with respect to Iraq.
Multilateralism is like "world peace" or any other unrealistic ideal: everyone pays lip service to it, but most people embrace it precisely to the extent that it advances their own goals and interests, and are happy to discard it when it becomes inconvenient. The French and German governments, for instance, are quite fond of multilateralism when it reins in American power, but much less so when it reins in French and German power--as their reaction to the letter of dissent from eight European countries amply demonstrated. Friedman and Sullivan are hoping to "revive" multilateralism by "encouraging" the world's nations to "do what needs to be done"--in a multilateral, consensus-building way, of course. In other words, they don't really support multilateralism at all; they just wish the world's nations would all "multilaterally" agree with them on how to proceed.
The reason why even multilateralism's supporters, like Friedman and Sullivan, end up being its fair-weather friends is that at multilateralism's heart is a completely absurd premise: that the world's nations form a community, in the same sense that the individual inhabitants of a particular locale do. In fact, human communities are made up of people who by natural instinct recognize their own individual helplessness and are inclined to cooperate with their kin--and by extension, with their fellow nationals--for the sake of their own survival. Nations, on the other hand, are largely autonomous units whose populations are instinctively hostile towards distant foreigners, and whose leaders are--each and every one of them--either non-democratic despots motivated solely by the will to power, or else democratically elected leaders sworn to advance their own voters' parochial, selfish interests tirelessly and ruthlessly at every opportunity. To expect such a community of governments to cooperate village-style is simply a foolish fantasy.
None of this implies, of course, that cooperation among nations is impossible. What it does imply, though, is that such cooperation must be based on a clear, explicit and well-understood commonality of interests, not some fond commitment to a higher ideal of global comity. Defense treaty organizations like (the old postwar) NATO, economic organizations like the EU, NAFTA and the WTO, and even arms control treaties, properly verified, can work just fine provided that they are perceived by all the parties involved as mutually beneficial. Where conflicts of interest exist--and they inevitably will, in the ever-shifting world of geopolitical diplomacy--multilateralism breaks down, and simply cannot be repaired. The UN, founded as it is on its false premise of a world community bound by more than mere selfish state interests, has been a horrible failure from day one, and no amount of harsh speeches by American cabinet members, or musical-chairs games with Security Council memberships, can save it.
Saturday, February 08, 2003
France and Germany are rumored to be developing a "disarmament plan" for Iraq to be presented to the UN Security Council. The plan, "designed to avoid war", would involve a "tripling of the number of weapons inspectors" and "deployment of UN soldiers throughout the country" as, in effect, human shields.
One can just picture President Chirac and Chancellor Schroeder on the tarmac at Baghdad Airport, beaming with pride, as a huge throng of UN personnel parades past in formation, whistling the "Colonel Bogey March"....
Michael Kelly effusively praises the Bush administration's proposal for a massive AIDS-fighting program for Africa, and calls for even more. "The drugs that are saving the lives of thousands upon thousands of Americans can do the same for millions upon millions of Africans," he writes. "Do more. Up the ante. Make that $15 billion $30 billion. Do it now. Save 10 million lives."
Now I'm no medical expert--and I invite knowledgeable readers to correct me if I'm wrong--but this whole plan sounds to me like a recipe for utter disaster. As I understand it, antiviral medications are somewhat like antibiotics, except that they are much trickier to use effectively, and their targets much harder to kill and much more adept at rapidly developing drug resistance. Flooding underdeveloped countries in Africa and elsewhere with these new anti-AIDS cocktails virtually guarantees that they will be used incompetently, failing to save the patients who take them, but powerfully expediting the evolution of drug-resistant strains of the AIDS virus. And if other potent viruses turn out to be vulnerable to the same or similar drugs, then they, too, may well develop resistance as a result of the drugs' widespread casual use.
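As a purely illustrative toy model--my own invention, with made-up numbers, not a real pharmacological simulation (real antiretrovirals suppress the virus rather than cure it)--here is roughly the selection effect I mean: with consistent dosing the drug drives both strains down to nothing, while erratic dosing fails the patient and leaves a thriving, drug-resistant infection:

```python
import random

def simulate(adherence, generations=300, seed=1):
    """Toy two-strain model of viral load under imperfect therapy.

    All parameters are invented for illustration. Each generation the drug is
    actually taken with probability `adherence`. The wild-type strain is
    crushed by the drug but thrives without it; the resistant strain pays a
    replication cost and is only mildly suppressed by the drug.
    """
    random.seed(seed)
    wild, resistant = 1e8, 0.0      # starting viral loads (arbitrary units)
    mutation_rate = 1e-6            # fraction of wild-type copies mutating to resistance
    cap = 1e12                      # crude carrying capacity (viral "set point")
    for _ in range(generations):
        drug_taken = random.random() < adherence
        new_wild = wild * (0.3 if drug_taken else 1.6)
        new_resistant = resistant * (0.9 if drug_taken else 1.4) + new_wild * mutation_rate
        total = new_wild + new_resistant
        if total > cap:             # scale both strains down to the carrying capacity
            new_wild *= cap / total
            new_resistant *= cap / total
        wild = new_wild if new_wild >= 1 else 0.0        # tiny populations die out
        resistant = new_resistant if new_resistant >= 1 else 0.0
    return wild, resistant

for adherence in (1.0, 0.5):
    wild, resistant = simulate(adherence)
    print(f"adherence {adherence:.0%}: wild-type {wild:.2g}, resistant {resistant:.2g}")
```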
All of this would be merely tragic if AIDS were a disease like, say, malaria--that is, one whose spread is extremely difficult to prevent, and against which any medical weapon, however weak and temporary, is still better than hopeless despair. But AIDS is ridiculously easy to prevent (especially for those of us whose personalities provide a kind of natural resistance to transmission, but for others as well). Before the age of widespread antibiotics, in fact, nobody would ever have thought of treating an epidemic of what was at the time called "venereal disease"--even a fatal one--as a matter of helpless victims being cut down by the thousands or millions. Back then, public health programs that spread the word about the risks of unsafe sex, partner-tracing to notify potentially unaware infectees, and, yes, social stigma kept incurable sexually transmitted diseases more or less under control in Western countries.
These days, of course, attitudes towards STDs have softened considerably, with the result that their spread has been much more poorly contained than in the past. Still, a robust health care infrastructure, including the widespread availability of testing, condoms, STD information and medical guidance, has mitigated the overall deterioration in vigilance, and reduced the danger of antiviral drug misuse. (The high price of the drugs has, in a perverse way, helped as well.)
In countries where such infrastructure is lacking, though, a medication-based campaign against AIDS is not only doomed to costly failure, but may well end up jeopardizing far more lives than it saves, in the long run, by blunting the only weapons available against the disease's onslaught. Certainly, it cannot substitute for the basic hygiene practices that these societies simply must adopt if they are to have any hope of recovering from the crushing medical disaster that has befallen them.
Wednesday, February 05, 2003
Paul Krugman, in a rare burst of sanity, has joined Gregg Easterbrook in calling for the end of the manned space program. On the other side, innumerable commentators, such as Glenn Reynolds, Charles Krauthammer, E. J. Dionne, and a parade of NASA veterans such as Jay Buckey and Buzz Aldrin, have endorsed the continuation of manned space missions, using woolly justifications like "the spirit of adventure", "mankind's destiny", and so on.
Now, I don't necessarily agree with these space travel enthusiasts, but let us suppose for a moment that I supported their goals, at least in the long run. It doesn't follow that I'd consider the best way to advance their cause right now to be continued launching of manned missions at all costs. After all, any such efforts today would involve only known, well-understood technology, and would be more likely to lose public support by failing spectacularly than to win it by serving as a platform for the standard, tired "stupid weightless astronaut tricks" in front of a TV camera.
Real progress towards space travel would require first solving some of the daunting problems that make long-term human habitation of space so difficult. Here, for example, off the top of my head, are three avenues of scientific research in each of which enormous progress is essential before any large-scale manned space travel becomes possible. Moreover, all three can be pursued today without the cost and risk of actual manned space missions, and all three hold enormous potential for beneficial technological spin-offs quite apart from their space travel applications.
(Mostly) self-sustaining environments. The failure of the Biosphere 2 project demonstrates that even under ideal circumstances, and even given huge resources, creating an environment capable of sustaining humans with no input of resources from outside (apart from energy) is exceedingly difficult. Even inventing better technologies for partially recycling life-essential materials would not only make space travel more plausible, but might also prove profitably applicable in other settings where resupply is difficult and costly, such as submarines, desert or (ant)arctic stations, and offshore drilling platforms.
Energy-harvesting. Solar energy conversion is obviously already an established research field, as is geothermal energy. In space, still other forms of energy, such as "solar wind", may conceivably be exploitable, not only by future manned missions, but by unmanned spaceships and satellites as well.
Large-scale upgradeable systems. Today's aircraft industry builds extremely complex, highly reliable systems that must be maintained over many years. Its solution is to spend enormous effort creating and thoroughly testing a single modular design, then keeping it unchanged over the lifetime of the system by replacing worn parts with identical substitutes. The downside of this approach is that it does not accommodate upgrades, such as technological improvements, very well at all. Spacecraft on long missions may not have the luxury of airline-style maintenance practices, and need to be designed with many more contingencies in mind than the average aircraft faces. Under those circumstances, a more flexible, upgradeable design may be necessary--without any loss of reliability.
I predict that if these (and a few other) fundamental research problems are successfully addressed, then the advances in "core" space travel technology--rockets, orbital vehicles, and so on--needed to make space stations, moon bases and even Mars missions possible will be a relatively simple afterthought.
A notorious Muslim cleric in Britain has described the destruction of the space shuttle Columbia as "a punishment from God". And of course, some Palestinians celebrated the event, suggesting that the shuttle's real mission was to spy on Arab and Muslim nations.
It has apparently not (yet) been rumored that the Israeli astronaut aboard was warned of the crash in advance, and didn't show up for work on Saturday morning.
Tuesday, February 04, 2003
Monday, February 03, 2003
As an experiment, I have introduced a new "comments" feature. Readers are invited to pipe up with their responses to any of my postings by clicking on the "comments" link underneath it. If the volume of replies (or the feedback, via either comments or email) suggests that readers appreciate the feature, then I'll keep it around; otherwise, I'll return to my previous practice of shouting incoherently into a silent, passive cyberspace. It's up to you, folks....
Oxblog's Josh Chafetz dismisses the significance of the latest match between chess champion Garry Kasparov and a computer. "Chess is a fully self-contained world, with a fairly simple set of rules," he writes. "The day is still, I think, a long way off that computers will be able competently to navigate the real world, because the real world does not have a set of easily understandable rules."
Actually, computers are perfectly capable of navigating the real world competently. I've never seen one trip and fall, for example, or injure itself accidentally. Computers are rather passive, sedentary creatures, of course--they don't tend to move, being content to let others place them where they please. But does that make them incompetent?
The answer, of course, is that it depends on the rules of the game. There is currently a large number of well-defined "games", such as go, face recognition, or basketball, at which humans are still much better than machines. But we can easily conceive of the day, however far off, when we might lose to the world's best robot/computer at just about any such well-defined game, including these three. Our natural reaction is therefore to come up with games (such as "navigating the real world", "appreciating a rose or a symphony", or "pondering the meaning of life") for which the rules are sufficiently ill-defined that we can smugly declare ourselves the winners. The point of these nebulous "games", in truth, is that to "win" at them is simply to be human, and since a computer will never be a human, we thus win by default.
The fallacy here is not underestimation of the potential of technology, but rather overestimation of the indefinability of human nature. We assume that because we have virtually no understanding of the "rules" by which we play our various human games, the rules are therefore undefinable. In fact, we may be far better defined than we realize, behaving according to a set of rules that are encoded in our genes and expressed in the structure of our nervous systems, and can in principle be programmed just as easily into an appropriately designed computer.
Of course, the games we play are not necessarily the ones we would want a computer to play. Do we really want to build computers that--following the rules of our own "game"--overeat, stupefy themselves with mood-altering substances, slack off at their jobs, get into petty fights with each other, and routinely make careless errors? Obviously not; why on earth would we build a machine to do what we do so naturally, all by ourselves?
No, we would inevitably choose to build our computers to do what we can't do, such as work tirelessly and without error at some immensely taxing yet deadly dull cognitive task--like, say, generating and evaluating millions of chess moves. And that is, in fact, precisely what the designers of Garry Kasparov's computer opponent have done. They have not, it is true, created a human. But if that had been their goal, then why would they have bothered to tinker with computers at all, when the old-fashioned way is surely faster, cheaper and more fun?
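For the curious, the brute-force core of such a program is no mystery. Here is a schematic sketch--mine, in Python, and nothing like the actual engine Kasparov faced, which adds alpha-beta pruning, opening books, and a heavily tuned evaluation function--of the generate-and-evaluate loop, demonstrated on a trivial take-away game rather than chess:

```python
def negamax(state, depth, evaluate, legal_moves, apply_move):
    """Schematic minimax search in negamax form: generate moves, recurse, evaluate.

    The game itself is supplied through callbacks; `evaluate` scores a position
    from the point of view of the side to move.
    """
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None
    best_score, best_move = float("-inf"), None
    for move in moves:
        child_score, _ = negamax(apply_move(state, move), depth - 1,
                                 evaluate, legal_moves, apply_move)
        score = -child_score        # what is good for the opponent is bad for us
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

# Demo on a trivial game: players alternately remove 1-3 stones; taking the last stone wins.
take_moves = lambda pile: [m for m in (1, 2, 3) if m <= pile]
terminal_eval = lambda pile: -1 if pile == 0 else 0   # the mover facing an empty pile has lost
score, move = negamax(21, 21, terminal_eval, take_moves, lambda pile, m: pile - m)
print(score, move)   # -> 1 1  (the mover wins by taking one stone, leaving a multiple of 4)
```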
Sunday, February 02, 2003
The breakup of the space shuttle Columbia has resulted in the exhumation (by Mickey Kaus, for instance, and Andrew Sullivan) of Gregg Easterbrook's 1980 Washington Monthly article on the shuttle program. Easterbrook, with seeming prescience, pointed out a number of risks inherent in the shuttle design, including its inability to withstand either a failure of the solid rocket boosters (as in the 1986 Challenger disaster) or damage to the heat-shielding tiles that is now being considered as a possible cause of the Columbia's demise.
The article's supposed prescience is actually seriously overstated; Easterbrook was in fact merely playing the easy game of listing the known, understood risks and problems with a new technology and then snorting, "it'll never work". For example, he never mentioned anything at all about the possibility of a leaking solid rocket booster (that is, of a massive jet of flame shooting out the side of the booster, igniting the main fuel tank). Rather, he concerned himself with the more prosaic risk of a booster simply cutting out in flight. (No such failure has occurred to date.) Easterbrook also "anticipated" the extreme difficulty of maneuvering the cumbersome shuttle to a safe landing without test flights, wondered whether the main engine, which kept blowing up during test firings, could ever be made reliable, and noted "residual doubt" about whether the heat-shielding tiles "can be relied on at all." The shuttle, he wrote, is "several years behind schedule, with no imminent prospect, despite official assurances, that it will fly at all." Columbia's first launch was about a year after his article's publication.
Fortunately, Easterbrook's recent reaction to the Columbia crash avoids technical gripes and unjustified "I-told-you-so"'s, and concentrates instead on the real strength of his earlier article: its critique of the space shuttle project as a failure of science and technology policy. On these matters, Easterbrook is generally quite sensible: the space shuttle and its companion project, the space station, are far too expensive for their very limited payoffs. For commercial and military tasks such as satellite launches, disposable rockets are far cheaper and more dependable, and for scientific research, unmanned missions are much more efficient. The whole space program desperately needs to be rethought in terms of its specific goals and the most cost-effective means to achieve them, and it's almost certain that neither the shuttle nor the space station could survive such a rethinking.
The worst outcome of the Columbia tragedy would be a technical witch-hunt that seeks out the people and parts responsible for this particular failure, then metes out punishments and effects repairs, ignoring the more fundamental doubts that Easterbrook and others have raised about the entire program. Unfortunately, the path of least resistance for any journalist right now is to indulge in "unheeded warnings" blame-mongering--not to risk irritating audiences with the unsettling suggestion that the lives of seven brave astronauts may just have been wasted on a completely pointless mission.
Saturday, February 01, 2003
Those of you whose thought processes are as twisted as mine will be relieved to learn that there have actually been six previous astronauts "from Jewish families". Moreover, at least two, David Wolf and Jeffrey Hoffman, with eight successful shuttle flights between them, have even spoken publicly about their identities as Jewish space travellers. So while 100% of shuttle disasters have involved a conspicuously Jewish crew member, it is not necessarily fatal to have a Jew aboard your spacecraft.
One takes one's solace where one can, I suppose.