Friday, December 27, 2002

Few debates in modern society are so rife with deep confusion over principles and motives as the debate over the relationship between sexual conduct and the state. Consider, for instance, Tim Noah's article in Slate on President Bush's recent executive order allowing federal "faith-based initiative" funding to religious institutions that "discriminate on the basis of religion". The order was designed to allow religious charitable organizations to preferentially hire members of their own faith without jeopardizing their access to federal grants. This permission has both obvious justifications and worrisome implications, but Noah, who generally accepts the idea, focuses on one of the latter in particular: that once the power to judge "membership" is granted, "other forms of discrimination can slip in the back door". His example: a Methodist children's home "decreed that homosexuals and unmarried heterosexuals who engage in premarital sex are ... not Methodists—or at least not sufficiently Methodist to be eligible for employment there."

Now, the "discrimination" case for granting full, unconditional equality to homosexuals is a bit like the "perjury" case for impeaching former president Clinton: it's really a moral argument about sex masquerading as a legal argument about rights. In its most reasonable form, it targets an underlying weakness in most claimed justifications for singling out homosexuality (uniquely) for opprobrium: they apply equally to other traditionally shunned forms of sex, but are rarely (or at least rarely vocally) invoked against, say, extramarital sex in general. Casual heterosexual sex is also, after all, a sin in the major monotheistic religions; it undermines marriage and family just as much (or as little, depending on your point of view) as homosexuality; and when practiced promiscuously, has comparably disturbing public health consequences. (It's no surprise that the "gay liberation" movement followed directly on the heels of the sexual revolution; the anti-traditionalist, sexually libertinistic principles embraced by the latter lead easily and naturally to an embrace of the former.) Singling out homosexuality and homosexuals for moral condemnation while remaining silent on other deviations from traditionalist sexual norms is thus at least somewhat hypocritical, and gay rights proponents (such as Noah) thus implicitly invoke the sexual laissez-faire spirit of the times when arguing for gays' right to "equality".

But the Methodists Noah denounces, in effect, as bigots are not vulnerable to such an argument. They endorse a completely traditional view of (all) sex that, while possibly not attractive to Tim Noah, has the advantage of being strictly faithful to its own religious teaching, completely consistent in its embrace of a particular ideal of family structure, and even defensible as a public health strategy. To argue that it is unjustly discriminatory is thus to argue the same of virtually all marriage and family law, all religious (or other) moral teaching about sex, and, for that matter, all denunciations of promiscuity (even, for example, on public health grounds). If Noah actually embraces absolute sexual and domestic libertinism so completely that any deviation from it amounts, in his eyes, to bigotry, then it is he, not the Methodist children's home, that has adopted a rigid and intolerant fundamentalism that brooks no dissent in its desire to impose its own uniform moral vision on all of society.

It's hard to believe, though, that Noah really means to suggest that religious denominations, to avoid discrimination, must be utterly silent on matters of sexual morality. More likely, he, like so many who discuss this topic, is simply confused.

Wednesday, December 25, 2002

Nick Kristof, in the New York Times, and Wendy Sherman, in the Washington Post, have both written columns this past week urging the Bush administration to "negotiate" with North Korea. It's not at all clear what the verb even means in the context of North Korea, a nation that has consistently ignored any and all commitments it has made in the course of its past "negotiations" with Western countries. As far as anyone can tell, simply airdropping food, money and leaflets proclaiming friendship over Pyongyang would be faster, easier and no less effective than attempting to engage in some kind of empty diplomatic charade with the government headquartered there. The strongest argument Kristof can come up with in favor of opening up a dialog is that the current policy of sanctions and isolation is "not working". But it's hard for him to make that claim when he can't point to any goal that negotiations could plausibly achieve that is not already being accomplished by today's policy of what amounts to irritated neglect.

Sherman, however, makes a much more relevant point: The South Korean government and public currently both support (however foolishly) a policy of engagement. And given that South Korea is the main "front-line" state targeted by the North, and a strong US ally to boot, any American stance more hardline than Seoul's would not only stir South Korean resentment (Sherman's primary, largely irrelevant concern) but, more importantly, end up being undercut by South Korean accommodationism. A policy of isolation, for instance, is pretty difficult to enforce if the South is disinclined to participate in its implementation.

The US has been in these shoes before, of course--most notably at various moments during the Cold War, when Western Europe distanced itself from more hawkish American administrations trying to maintain a united front of firmness against the Soviet Union. (Non-Israeli skeptics of the late stages of the Israeli-Palestinian "peace process" were in a similarly difficult position during Ehud Barak's prime ministership, when he started making huge unilateral concessions in the hopes of winning reciprocation from Arafat. Who were we, after all, to second-guess the nation that stood to suffer the most serious consequences of any error in judgment?)

But the lesson of these past encounters with crazy regimes is clear: totalitarian leaders, when offered concessions, invariably overplay their hands, and healthy democracies, after a period of foolish flirtation, always come around--eventually--and respond by toughening up their stances. In the case of the Korean peninsula, where Pyongyang has never been shy about directly provoking the South with pointless acts of shocking aggression, the pattern is particularly dependable. The US should quietly agree to follow Seoul's lead regarding North-South relations, confident that sooner rather than later, America's view will be vindicated, and its South Korean partner will willingly join it in applying sustained, vigorous pressure (to the extent possible) on the maniacal regime to the north.

Tuesday, December 24, 2002

Even in this age of murderous fanatics and craven appeasers, some examples of moral confusion are so incomprehensibly extreme that they stretch one's conception of the range of human folly. Such is journalism professor Ted Gup's rambling paean, in the Washington Post, to the free speech rights of a rabid anti-Semite.

Confronted with a Cleveland restaurateur who has adorned the wall of his deli with a mural depicting a whole panoply of hate-filled lies about Jews ("Jews as monkeys wearing yarmulkes", "a Jewish conspiracy in control of American network television"), Gup nevertheless expresses thanks for the absolute right of free speech. Understandable enough, one thinks as one reads; such laws allow Gup to recognize and avoid despicable bigots like proprietor Brahim Ayad, to organize protests against them, and to be forewarned of their potential for violence. But no, Gup sees more direct social benefits, as well: "Hate speech need not be a precursor to violence. On the contrary, it can defuse tensions that could turn explosive." Ayad's murals "invite the possibility, however slim, that we might find some sliver of common ground, that confrontation could lead to conciliation".

Gup visits Ayad's shop, and finds him "courtly, soft-spoken and oddly vulnerable", despite the hate literature he hands out and the conspiracy theories he spouts. (Bigots are people too, after all.) He may be an anti-Semite, but he appears quite friendly with the local black population--"if he is a bigot he is most selective", writes Gup, approvingly. The lessons of pride and self-confidence Ayad teaches to his eight children are "[n]ot so different from what I tell my own sons." (The scary thing is, he might be right.) Gup draws a contrast between this disturbingly cordial chat and his childhood experiences as a Jew growing up in the stuffy, quietly discriminatory Midwest of the 1950's, "when prejudice was vented only in whispers between like minds." Dealing with "prejudice [that] was hidden behind disingenuous smiles" convinced him that "a silenced bigot can do far more mischief than one who airs his hatred publicly."

It's hard to know what to make of Gup's ludicrous misinterpretation of these past and present brushes with racism. Perhaps his free-speech dogma has tied him in rhetorical knots; perhaps he was taken in by Ayad's friendly manner; or perhaps he's just dense. But one thing is clear: the secretive bigots of Gup's youth were not more dangerous than the overt, flagrant anti-Semites of today for their having been suppressed. In fact, they were not legally suppressed at all; to the extent that free speech rights were interpreted less expansively in the 1950's than today, they hardly afforded less protection to open expressions of racism. On the contrary, most forms of bigoted expression were far freer from formal restraints back then than they are today--and were accompanied by much more overt harm, including violence, towards their targets. While Gup was enduring muted slights in Canton, Ohio, for example, blacks in the American South were being openly subjected to vicious slurs on a routine basis--and certainly had no reason to feel safer as a result.

No, what confined anti-Semitism behind disingenuous smiles in the 1950s was, in a word, shame--the widespread recognition that racial and ethnic prejudices, even fairly common ones, were nonetheless considered in many quarters to be immoral. Likewise, what protects Ted Gup today from monsters like Brahim Ayad is not Ayad's unfettered freedom, but rather his inability to win popular acceptance for his depraved blood libels. And if, God forbid, his despicable views should gain widespread credence (as they have, for instance, in the terrorist-breeding towns of the West Bank and Gaza Strip), then the First Amendment will hardly protect Gup from the inevitable ensuing brutality. Moreover, the path from here to there, should it end up being traveled, will have been paved by the warm indulgence shown the likes of Brahim Ayad by the likes of Ted Gup.

There are, it is true, serious, plausible arguments for eschewing legal restrictions on hate speech: the difficulty of distinguishing it from more defensible opinions; the value of being able to gauge its popularity openly, and counter it vigorously as necessary; the patina of glamor that inevitably attaches to anything banned. But none of these contradict the simple fact that hate speech is unambiguously evil. It encourages and cultivates violence and other forms of direct harm, and should be shunned and condemned without reservation or mitigation--even when promulgated by a jovial deli owner in Cleveland. Shame on Ted Gup for losing sight of this glaringly obvious truth.

Sunday, December 22, 2002

Andrew Sullivan has collected nearly $80,000 via his blog with his "pledge week", Glenn "Instapundit" Reynolds an undisclosed (but presumably somewhat lesser) sum with his "tip jar". Such fundraising has actually become quite common among bloggers. At least one, though, has questioned the seemliness of articulate, employable (and usually lucratively employed) amateur diarists engaging in what amounts to electronic panhandling.

Here at ICBW, of course, we have no need for any such shenanigans, being in possession of a multi-million-dollar trust fund donated by a recently-deposed African strongman, who happened to share our strong views on the evils of judicial overreaching. Unfortunately, the money is currently locked up in an escrow account in Lagos, Nigeria; readers interested in assisting us in transferring it to a more accessible locale (in return for a hefty percentage of the proceeds) are encouraged to email us for further instructions. (Or you can just drop us an email to let us know you enjoy reading the blog, or to tell us to knock it off with that annoying royal "we".)

Saturday, December 21, 2002

The Wall Street Journal's recent editorial expressing concern that too many non-affluent Americans may be paying too little in taxes--and hence lacking in tax-cutting passion--has understandably come under fire from liberals like Tim Noah, E.J. Dionne, and (of course) Paul Krugman. They quibble with the math (low-income people pay lots of tax, even if most of it is not income tax), condemn the mean-spiritedness of the idea, mock the irony of Republican "class warfare", and write ominously of the potential for this "meme" to take off, with predictably awful consequences.

What they (again, understandably) seem to have missed is the bigger picture: the sheer desperation infecting ideologues like the WSJ editorialists that has them actually hoping for a major political setback, just so that they can recover their lost feeling of gaining momentum and making progress. The simple fact is that anti-tax fervor has won in America; it is a universally accepted conventional wisdom, it can command automatic, overwhelming political support, and no politician dares raise a firm voice against it. In other words, it has peaked, and has nowhere to go but down.

Over time, segments of the public that once reflexively opposed taxes will inevitably start noticing attractive governmental opportunities missed as a result of this dogma, and anti-tax zealots will find themselves fighting rearguard holding actions against popular measures to raise revenues for particular widely-supported government expenditures (or simply to preserve popular current ones, in the face of mounting deficits). Tax-cutters will at that point have morphed from impassioned revolutionaries into beleaguered defenders of the status quo, just like the tax-and-spend liberals whose barricades they once stormed. The wheel will have come full circle.

In their hearts, I believe that the "preserve the poor's tax burden" set subconsciously understand their problem; they dread their inevitable fate, and are frantically trying to preserve the last vestiges of their beloved insurgent status. Their efforts, of course, are doomed to failure.

Thursday, December 19, 2002

I have often described the modern liberal arts university as an institution suffering from a lack of a substantial social purpose to serve. But that purpose (at least in the case of the elite schools) may have been found at last. David Brooks, the perspicacious anthropologist of modern society who unearthed Patio Man, has turned his attention to academia, and the result is a fascinating portrait of today's top tier of college students.

On the plus side, these students are extremely hardworking, ambitious, industrious, mature, self-confident overachievers, a gender-integrated version of the famous Japanese "salaryman" cohort that led that country's economy to such heights in the seventies and eighties. They run themselves ragged pursuing opportunities, organizing activities and advancing their careers, caring little for frivolities like dating and romance. Their studies are in fact just a sideline to their resume-padding extracurricular pursuits. "It is through activities that students find the fields they enjoy and the talents they possess," writes Brooks. "The activities, rather than the courses, seem to serve as precursors to their future lives."

Brooks sees some less positive aspects in this culture, though. The students he observed are not really intellectually engaged, or even unusually intelligent; they have instead been selected for their having "master[ed] the method of being a good student", and treat excelling in their studies as another task to be accomplished with their trademark diligence. Moreover, the "tyranny of the grade point average" stifles specific passion in favor of bland generalism, and "punishes eccentricity". Ambitions are narrowly focused on the few professions that offer well-trodden, steeply upward-inclining career paths, at the expense of all sorts of unusual or creative possibilities. As a result, these students lack "any clear sense of what their life mission is", or even "of what real world career paths look like".

These would be very serious criticisms indeed, if a large fraction of college-age youth ended up in such a setting. As a forge for America's leadership and managerial class, though, an environment that directly cultivates hands-on leadership and management (not to mention workaholic devotion to accomplishment) doesn't sound bad. (Societies--even highly successful ones--have certainly done worse; think of the English "public school" culture of institutionalized physical brutality that molded the functionaries of the British Empire.) America does not lack its share of independent-minded visionaries; developing a corps of hardworking careerists devoted to making things happen, on the other hand, would at least constitute a tangible contribution to the nation's success. And it's about time the country's top liberal-arts colleges finally returned to providing one.

Wednesday, December 18, 2002

Chris Bertram, a.k.a. "Junius", notes that despite Europe's reluctance to accept Turkey as a member of the EU, the Turkish political system follows European models in many respects. Jacob Levy concurs, adding that two particular European complaints about Turkey--its suppression of religious expression and of linguistic minorities--have strong precedents in French history.

Well, Levy is a political scientist, and Bertram is a philosopher, so I suppose they can be forgiven their unworldliness. And I'm not entirely unsympathetic to their criticisms of European snobbery about Turkey. But shocked though these two academic theorists may be to hear it, a nation's official political structure does not necessarily define its politics. I strongly suspect that Europeans are in fact not afraid that Turkey might become too aggressively secular, but rather that a strain of fundamentalist Islam is brewing just beneath the officially secular surface. Similarly, they are not concerned about a Turkey too strongly unified by strict enforcement of a single cultural norm, but rather about a country riven by ruthlessly suppressed ethnic strife that may continue to erupt periodically into dangerously destabilizing episodes of violence for decades to come.

A more interesting question, raised tangentially by Bertram, is what, precisely, might qualify Turkey (or any country, for that matter) as "European", if Turkey's current condition does not. While obviously any such definition is bound to be somewhat imprecise, I propose the following list of promising signs:
  • The presence of a large "internationalized" class who view their home country as a reactionary backwater, and an integrated, forward-looking Europe, the epitome of civilization, as its potential savior;

  • Widespread enthusiasm for an expansive, dirigiste welfare state;

  • A heavily state-subsidized "cultural sector" that produces dreary, unpopular content;

  • Frequent public displays of virulent hatred of the US and Israel.

Once Turkey presents the above symptoms, it will already be effectively European, and rapid finalization of its membership in the EU will then no doubt be a mere formality.

Monday, December 16, 2002

Dahlia Lithwick and Mark Kleiman have both noted approvingly the effect of Clarence Thomas' presence on the Supreme Court's hearing of the recent "cross burning" First Amendment case, Virginia v. Black. Thomas, a black justice of little jurisprudential distinction who was almost certainly nominated at least in part because of his race, and who rarely speaks during oral arguments, chose the occasion of this particular case to interject some passionate remarks about the meaning of the burning cross as a symbol of the Ku Klux Klan's long history of absolute, violently-wielded power over much of the American South. Lithwick concludes from the episode that "justice can only be done when judges representing the most diverse experiences and backgrounds sit on the nation's courts." Kleiman observes that "Thomas's intervention showed that race remains relevant to the exercise of the office of judging in the United States," and implies (though he demurs from asserting it as his opinion) that the de facto racial preference that brought Thomas to the Supreme Court might be seen to have benefited the court in this instance.

Now, as a non-conservative who originally opposed Thomas' nomination on the grounds that he was a mediocre judge nominated for purely political reasons (only to change my position when opposition to his appointment in the wake of the Anita Hill brouhaha suddenly became the more overtly political stance), I find it ironic that folks like Lithwick and Kleiman, who most likely gnash their teeth in frustrated outrage at 98 percent of Justice Thomas' opinions, would concentrate on this one case as evidence that selecting Thomas for the court might have had an overall positive effect. But even in this instance, it's worth examining the content and consequences of Thomas' comments.

He did not, after all, say anything that enlightened the Court as to a matter of law or logic. In fact, he did not say anything that was unknown to the other members of the court, or that might not have been said just as well by any one of them. Nevertheless, Lithwick remarks that "with his personal narrative, Justice Thomas changed the terms of the legal debate," and Kleiman mentions that "[t]he same speech from Scalia or Rehnquist would have been much more surprising, and probably less persuasive". In other words, its strong impact on the justices derived completely from the fact of Thomas' race.

I will be more blunt than either Lithwick or Kleiman: Thomas was using--just as he did during the Anita Hill hearings--indignant language about America's sorry historical record of mistreating blacks as a blunt rhetorical instrument with which to overcome more reasoned arguments. And unlike the Hill hearings, which had unfortunately already been politicized through and through, oral arguments in the "cross burning" case had been quite typically dry and dispassionate until Thomas' outburst. If his words managed to change any minds, it was certainly not on the strength of their legal significance.

Now, many Americans (apparently including Lithwick, and perhaps Kleiman as well) actually think it quite proper that such blatantly emotional approaches be incorporated into the Court's reasoning--that is, that the Court is entitled to factor political, social and cultural sentiments into its decisions. But if so, then the Court is just another political body--the Senate with fewer members and no elections; and whether it happens to be one-ninth black is only one of a very, very long list of discomfiting questions that deserve to be asked about its political representativeness. (As Lithwick herself concedes, "how much identity politics can you fit on the head of nine pins?")

Conversely, if there is any meaning or legitimacy to the Supreme Court's being an appointed legal body, rather than an elected, political one, then Thomas' little speech was completely inappropriate for it. Had it been delivered to the Virginia State Legislature debating the ban on cross-burning that was at issue in the case, it would have been entirely fitting; in the courtroom, it was yet another demonstration that Clarence Thomas was a poor choice for his seat on the court.

(As an aside, I actually share Thomas' views, Constitutional and political, on the cross-burning issue, although I'm much more careful to separate the two. I believe that the First Amendment exists to protect the democratic process, and thus properly applies only to the lifeblood of democracy: written or spoken speech. Symbolic gestures of any kind--from flag-burning to cross-burning--are unnecessary to the orderly conduct of the democratic process, and are therefore open to regulation without fear of collapse into tyranny. The expansion of the "free speech" clause to cover them--and much more--is, in my view, part and parcel of the Constitution's embarrassing and unwise elevation from pragmatic guardian of democracy into elaborately interpreted quasi-religious tract. Noxious speech may require tolerance in the name of an open political process; but noxious symbols and gestures are as likely to undermine that process as to enhance it.

In the case of cross-burning, I also happen to agree with Thomas and the Virginia legislators--as a political matter--that cross-burning is a despicable enough action to deserve a ban. I personally feel much less strongly about flag-burning, being Canadian and all; but I consider the claim that these two literally incendiary gestures are Constitutionally different to be utterly incompatible with any notion of Constitutionality that rises above individual political and cultural sentiment--of the kind to which Justice Thomas was so nakedly appealing.)

Saturday, December 14, 2002

Now that Henry Kissinger has resigned as co-chair of the "independent" investigation into the 9/11 terrorist attacks, it appears that some other elder statesman with a gravelly voice will have to be found to deliver the stunning news: that pretty much everyone in the country was unprepared for what happened, and that the very few people who suspected something along those lines might be in the offing were, in retrospect, much, much more right than anyone realized at the time. (Eugene and Sasha Volokh's father suggests Tom Clancy as chair, but I expect the latter is more adept at writing spectacular fiction than pedestrian truth.)

Independent commissions of inquiry have many purposes, a few of them even legitimate. Probably their most respectable role is to act as official codifier and publicizer of some already-well-established set of facts being challenged by conspiracy theorists or generic wackos. (The Warren Commission was the prototype; it may not have been as successful as it had hoped to be, but think how wild and, uh, imaginative the Kennedy assassination conspiracy industry could have been without its careful establishment of a basic set of facts and evidence on the event.)

Finger-pointing commissions, on the other hand, are much less effective; it's much easier to believe that an august panel of dignitaries has misplaced the blame for some fiasco than that they've deliberately fabricated or covered up hard evidence. If it's politically or emotionally expedient for some people to reject the 9/11 commission's conclusion that it was, say, all Larry King's fault for failing to book Steven Emerson often enough, then they will.

Besides, how on earth can "blame" for failure to anticipate 9/11 possibly be calculated? When an event is unanticipated, that means that most people have adopted a worldview that rates such an event as highly unlikely. Now that the event has happened, of course, it's possible that their calculation of the likelihood of such a threat was grossly--even negligently--mistaken. It's also possible, though, that their calculation was completely correct. For all we know, the September 11th attack was a wildly improbable "success" that depended on numerous strokes of unbelievable luck on the terrorists' part (say, multiple near-compromises of the plot that somehow were just barely avoided). It's also possible, for all we know, that an attack viewed at the time as far more likely (say, a missile launch by a "rogue state") happened to have been derailed around the same time by some remarkable fluke of good fortune.

Indeed, even if an al Qaida attack was the most likely threat at the time, it's still possible that no reasonable information collection and analysis strategy (that is, one that did not recognize a priori the nature of the al Qaida threat) would have identified it as such. As Slate's William Saletan pointed out a few months ago, a "pattern" of evidence comparable to the one since observed regarding the September 11th attack could also have been constructed to support a prediction of any number of other disasters that, as it turned out, never came to pass. Expecting the unexpected is a very difficult business; there's just so much out there not to expect.

That doesn't mean that the government shouldn't be hard at work seeking out, studying, investigating and evaluating threats to the country, of course. What it does mean, however, is that a thorough examination of the process that missed the last threat is not likely to be particularly fruitful. (It could, for instance, spawn a "terrorist threats first" approach that will miss the next, non-terrorist danger.) A more productive exercise would be a simple best-effort threat assessment, based on the best information currently available, then an enumeration of possible countermeasures, and identification of the most cost-effective of these (in terms of estimated threat reduction potential) for implementation. I see no reason to think an "independent commission of inquiry" headed by Henry Kissinger or any other grandiose luminary would be nearly as effective as, say, a Pentagon "skunkworks" team (or perhaps several, working independently) at performing this function.

Readers of the Volokh Conspiracy weblog may be puzzled by the references there to a substance called "chopped liver". I will try to explain the origins of this bizarre material.

The liver is an organ present in all higher animals; its primary function is to secrete a malodorous green substance called bile, essentially a detergent that helps dissolve fats to aid in their digestion. It also produces a cascade of other chemicals designed to neutralize various toxins; some of these toxins are also stored in the liver to protect the rest of the body from them. In some animals, the liver is actually toxic in its own right; polar bear livers, for instance, contain hazardous quantities of vitamin A. In other animals, the organ is (barely) edible, albeit with an extremely foul flavor.

However, humankind's long history of daring ingenuity and perennial food shortages has prompted many societies to attempt to exploit all manner of noxious resources--insects, worms, dirt--for nourishment. It was thus inevitable that some intrepid cultures would attempt to convert the liver from waste by-product of livestock slaughter into dietary component. And, as with so many other examples, liver originally consumed out of desperation evolved in some locales into a "delicacy", as palates trained to tolerate this revolting ingredient eventually learned to crave it. The French, for example--notorious converters of detritus into cuisine--have taken to subjecting geese to obscene tortures in the belief that these can actually render the birds' livers tasty (in fact, the forced-feeding regimen they impose only dilutes the organ's flavor somewhat, making it marginally less disgusting to eat).

Another European tradition asserts (mistakenly) that cooking the liver, mashing it, then mixing it with fat, onions and spices, can mask its nauseating flavor; hence Volokh's strangely enthusiastic references to this dish. And remember: if anyone ever asks you whether they're "chopped liver", it's only polite to assure them, in no uncertain terms, that they are not.

(Next week: "Steak and Kidney Pie -- The British Sense of Humor Strikes Again")

Thursday, December 12, 2002

Thomas Friedman, my favorite international affairs columnist, is now arguing that NATO should take control of the West Bank and the Gaza Strip, the way they did Kosovo. In fact, he claims to have suggested the idea a year ago. Of course, the analogy was absurd even back then; Kosovars weren't attempting to conquer and resettle all of Serbia, nor were they sending suicide bombers all the way to Belgrade to murder innocent civilians in support of their ambitions. But at least at that time the necessity of suppressing the "terrorist infrastructure" in the Palestinian Authority's domain with harsh military force, while obvious to many of us, hadn't yet been empirically proven. This past March, however, the Israelis moved in, and what has happened since? "Ariel Sharon has adopted a policy of hot pursuit and it has resulted in the Palestinian Authority's being destroyed and more Israelis being killed and feeling insecure than ever," writes Friedman. Does he really believe that more Israelis are being killed and feeling insecure than before Operation Defensive Shield? Does he even read his own newspaper?

Unless NATO were prepared to act as vigorously against terrorism as the Israeli army has, the bombings and killings emanating (currently at a much-reduced but still disturbing rate) from the PA's former stomping grounds would only increase under NATO rule. And the minuscule likelihood that, say, Norwegian troops would be willing to conduct dangerous and aggressive military operations against Palestinian terrorist organizations to capture "militants" and destroy munitions factories does not exactly inspire optimism about Friedman's plan. But widely celebrated international affairs pundits don't like messy, ugly situations with no elegant solutions; they're paid to be creative, and they'll propose grand, imaginative schemes, dammit, even if they have to ignore reality to do it.

You've got to hand it to Michael Kinsley; few writers can make a spectacularly flawed argument sound so spotlessly logical. His current critique, in Slate, of the practice of plea-bargaining is apparently derived from a 1978 article in the magazine Public Interest by one John Langbein, entitled "Torture and Plea Bargaining". To quote Kinsley:
Langbein compared the modern American system of plea bargaining to the system of extracting confessions by torture in medieval Europe. In both cases, the controversial practice arose not because standards of justice were too low, but because they were too high. In medieval Europe, a conviction for murder required either two eyewitnesses or a confession by the perpetrator. This made it almost impossible to punish the crime of murder, which was an intolerable situation. So, torture developed as a way to extract the necessary confessions.

Plea bargaining evolved the same way, Langbein explained. As our official system of justice became larded with more and more protections for the accused, actually going through the process of catching, prosecuting, and convicting a criminal the official way became impossibly burdensome. So, the government offered the accused a deal: You get a lighter sentence if you save us the trouble of a trial. Or, to put it in a more sinister way: You get a heavier sentence if you insist on asserting your constitutional rights to a trial, to confront your accusers, to privacy from searches without probable cause, to avoid incriminating yourself, etc.

This analogy looks reasonable on the surface, but it is in fact wildly self-contradictory. If the modern system of justice is as "larded with....protections for the accused" as the medieval one, then how can plea-bargaining threaten to impose a "heavier sentence" on someone who is nearly guaranteed an acquittal? What is the incentive for the accused not simply to demand a trial, and obtain an exoneration?

In fact, the medieval torture analogy fits one aspect of the modern criminal justice system perfectly--just not the part that involves plea-bargaining. Plea bargains almost always involve cases in which the defendant faces a very high chance of conviction, because the evidence for guilt is overwhelming. In such cases, the defendant has a strong incentive to accept the certainty of a lighter sentence in place of the near-certainty of a heavier one. This is a perfectly reasonable transaction for both sides, and not even Kinsley is able to come up with a serious argument against it.

The trouble starts when the authorities have somebody who they "know" is guilty, but whose prospects of acquittal are good--either because the evidence is genuinely weak, or because the elaborate labyrinth of "protections for the accused" prevents the damning evidence from being used. In such cases, modern equivalents of medieval justice--from ruthless interrogation techniques to falsification of evidence to "testilying"--become extremely tempting "correctives" to the system. And once used to convict the unmistakably, horribly guilty who would otherwise go free, they become equally tempting tools for winning convictions in cases where the evidence is less clear-cut--including some where the accused later turns out to have been innocent after all.

Now, all of this fits Kinsley's/Langbein's analogy perfectly. It even explains the ostensible topic of Kinsley's column ("Why Innocent People Confess") much better than all of his grumbling about plea bargains. But the clear lesson of this analysis--that doing away with absurd Constitutional barriers to convicting the obviously guilty might actually afford the innocent greater protection, by reducing society's incentives to quietly allow convictions to be obtained by corrupt means--is terribly unappealing to someone of Kinsley's ideological pedigree. So he must instead twist a perfectly good analogy in perverse directions, extracting a lesson--plea-bargaining is bad--that makes no sense, and that benefits nobody except a lot of criminals, their defense attorneys, and their political sympathizers.

Then again, given the incoherence and perniciousness of his claim, Kinsley certainly argues it well.

Wednesday, December 11, 2002

The blogosphere (and now the real world, too) is up in arms over Senate Majority Leader Trent Lott's fond reminiscences about Strom Thurmond's days as a rabidly pro-segregationist "Dixiecrat". Some commentators have expressed surprise and even alarm at the slowness with which the public outcry has arisen. Sunny optimist that I am, I view this delay as a positive sign; in effect, it demonstrates that Lott's expressed sentiment is now so far beyond the pale that everyone simply assumed he'd gotten his tongue in a knot, so to speak, and ended up blurting out something whose plain meaning he couldn't possibly have intended.

Of course (he said, returning to cynical mode), none of this has any bearing on whether Lott actually gets to keep his job or not. As I've often pointed out, a political scandal rarely has much to do with the actual offense ostensibly at the center of it; rather, it's an occasion for the protagonist's allies and opponents to do political battle, with the offender emerging unscathed, alive but bloodied, or completely destroyed, depending upon the relative strengths of the forces engaged in the melee. I have no idea how strong Lott's support is in, say, the Republican Senatorial caucus, but if it's strong enough, he could tout Osama bin Laden for president and get away with it, whereas if it's sufficiently weak, his crummy hairpiece alone is enough to sink him.

That having been said (he adds, returning to sunny idealism), the outcry among conservatives at Lott's remarks (and the initial relative restraint among liberals) extends a building (and most welcome) trend against racial polarization in American politics. (The defeat of several polarizing candidates in 2002 also provided conspicuous evidence of this tendency.) I haven't seen it mentioned, but I suspect that one of its biggest accelerators was the WTC attack; nothing promotes national unity better than an external enemy, and racial animus post-9/11 now seems not only quaintly outdated, but arguably downright unpatriotic. Somehow, I suspect that improved racial harmony in America is not the effect the September 11th hijackers had in mind....

Monday, December 09, 2002

Mark Kleiman is tussling with a few folks who resent his (and others') calling Elliott Abrams a "felon". Now, I agree completely with his point that while "innocent until proven guilty" is a perfectly appropriate standard in a criminal court, public discussions of public figures are free to use less rigorous standards of proof when judging criminality. Hence, describing Abrams as a "felon", despite his having plea-bargained his way to a couple of misdemeanors and an eventual pardon, is not unreasonable by the standards of casual (e.g., blog) conversation.

But, significantly, Kleiman also felt compelled to throw in an offhand-seeming remark that "Abrams wasn't deceiving the Congress about his sex life". In other words, it's not true that to him, a felon is a felon is a felon; allowances can be made for, say, people forced under oath to choose between mendacity and political suicide, and opting for the former, in the course of a political witch hunt decried by Jeffrey Toobin in a book as a scandalous misuse of prosecutorial power for political ends. Well, under that criterion, Abrams, too, would get the benefit of the doubt.

(Of course, Kleiman may well feel that the matters about which Abrams misled Congress were more the investigator's business than those about which a certain former president misled a certain plaintiff's lawyers. And personally, I'm inclined to agree. But then, that president didn't have to sign into law the statute that made the topic about which he was later pressured to dissemble a permissible line of investigation for sexual harassment plaintiffs. That he did suggests that by his own standards, the investigators who later caught him perjuring himself were acting legitimately.)

Hence, in the end, while I won't quibble with Kleiman for referring to Abrams as a felon, he shouldn't pretend that he's doing so merely because Abrams is, in the casual sense, a felon. More likely, it's because he objects strenuously to Abrams' past political acts and views (and quite possibly his present ones, as well). Wouldn't it be more appropriate, then, for discussions of the man to concentrate on his merits or demerits as a political figure, rather than legal hairsplitting? Isn't that what we both would wish for, after all, in our conversations about that former president with the embarrassingly similar problem?

Saturday, December 07, 2002

The New York Times may not allow columnists to disagree with its editorial page, but it apparently does allow op-ed pieces to conflict with columnists. For example, Chinese dissident Gao Zhan's critique of the Chinese education system directly contradicts Nick Kristof's glowing praise for it a few weeks ago. Who's right?

Actually, the two columns agree on quite a bit; Kristof asserts that academically speaking, "Chinese parents demand a great deal" of their children, that "Chinese students may not have a lot of fun", but that they "are driven by a work ethic and thirst for education" that leads them to study many hours a day, seven days a week. Gao writes that "[t]he competition in big city schools is intense", that "[s]tudying 15 hours or more a day is commonplace", and that "[l]est you think the pressure is only on the children, parents are not exempt." They disagree on the breadth of students' education; Kristof writes that "the brightest kids are not automatons; many are serious enthusiasts of art, music, poetry or, these days, the basketball plays of Yao Ming", while Gao complains that "the system discourages intellectual inquiry, especially in the humanities".

But mostly, they disagree over whether all this drive for educational achievement is really a good thing in the first place. Kristof lauds the country's "educational success", attributing it to the attitude that "good students do well because they work harder", and the fact that "parents set very high benchmarks". Gao is principally concerned that the Chinese educational system is "hopelessly politicized, a vehicle of propaganda". But she also frets that "[t]he pressure on children can be cruel" and that students are "poisoned by cutthroat contests for academic success". In other words, this isn't primarily a difference about methods, or even the results of methods; it's about the very goals of the educational system. Kristof admires China's schools for producing outstanding academic achievers; Gao considers their successes irrelevant at best, and instead criticizes them for being "inhumane", producing "hotheaded nationalists and submissive cogs"--albeit scholarly ones.

And her extra-academic priorities are shared by plenty of non-Chinese educational critics. In a Washington Post commentary, writer Christine Woodside expresses unusual concern for a little-noticed (and not-usually-deplored) development on the American educational scene: the decline of recess. Although she raises the argument that a break in the daily schedule might actually improve classroom learning (a tough sell, empirically speaking, given China's experience), most of her claims focus on her concern that American schools might be (believe it or not) "forbidding fun".

Now, few such educational "reformers" openly express indifference as to whether children actually learn anything in school. But nobody could advocate making, say, driving school or medical school less rigorous--fewer tests, more stress-free activities, less pressure on students--without inviting the obvious inference that the complainer cares little about the quality of the nation's drivers or doctors. And Gao's and Woodside's assertions that a less intense educational program would in fact be a more intellectually successful one are about as plausible as, say, the claim that children have more fun in a highly competitive school than in a slack one. There may be a tiny grain of truth to both statements, but anyone who makes them without massive caveats is most likely being completely disingenuous.

One can perhaps understand how a woman raised in an oppressive, totalitarian society could come to oppose highly disciplined education systems in general, associating them in her own mind with propaganda and perpetuation of state power. (That's not to say that the association is empirically valid; after all, Chinese students have been more involved in dissident activity, on average, than most segments of that society.) But what explains the popularity, here in the free world, of the notion that education--by far society's most powerful agent for personal, social, economic and technological advancement--is less important than adding to the modern Western child's copious quantities of leisure time?

Tuesday, December 03, 2002

James Taranto of "Best of the Web" notes a MEMRI report of a Saudi translation of a Kuwaiti newspaper's interview with Saudi Interior Minister Prince Naif Ibn Abdul Aziz. "I cannot still believe that 19 youths, including 15 Saudis, carried out the September 11 attacks with the support of bin Laden and his al-Qa'ida organization," avers the Prince. "It's impossible."

I guess they don't call him "Prince Naif" for nothing.....

Monday, December 02, 2002

Jacob Levy has posted a thoughtful response to my comments on "free speech" on campus. (He is also one of three bloggers recently added to my highly exclusive--though possibly not coveted, depending on what one thinks of my tastes--"blogs better than their exposure" roll.) We may not disagree as much as it seems; I will try here to sort out some of the intertwined issues.

First, we apparently agree fully that "time, place and manner" restrictions on speech--"no shouting outside dorms"-type stuff--can be quite legitimate. Levy also accepts that "[i]n-class speech that fails to advance an argument or to contribute to the academic enterprise is, of course, discouraged." (How vigorously it can be discouraged, Levy doesn't say.) Speech that is "merely offensive", however, is a different matter; according to the policy he wrote for the University of Chicago, it is not generally appropriate for "the University [to] intervene to enforce social standards of civility."

These last two positions form a striking contrast--with themselves, and also with traditional notions of academic freedom. In effect, they turn the usual form-content distinction on its head: we may forbid you to utter arguments in class that we consider invalid, but you are of course entitled to spew as many four-letter words as you please! That can't be what Levy has in mind--and it appears that indeed, it is not: "To single out a student for abuse," he writes, "to throw racial epithets at a particular person, to threaten with violence--these are over the line. They're violations of professional ethics and may well warrant university intervention." Now, to me, that sounds a lot like "enforc[ing] social standards of civility", and Levy seems quite comfortable with it.

What he is not comfortable with, as his examples show, is actually something very specific: restricting the expression of a specific set of ideas, on the grounds that a particular person or group finds it "offensive". Now, I happen to share this discomfort myself, but--and this is the crux of the issue, I think--I do not consider the principle at issue here to be the protection of free speech. There are, as our previous examples have shown, so many completely legitimate reasons for restricting free speech in an academic setting that standing on the principle in this class of instances invites charges of massive inconsistency.

No, the reason for not restricting speech based on perceived "offensiveness" is that this particular rationale for doing so is a terrible one. Left as is--"thou shalt not say things that are offensive to a particular person or group"--it is meaningless: who determines whether a given statement is "offensive" to a particular person or group? Defined more precisely--"thou shalt not say things that a particular person or group says it finds offensive"--it easily becomes grotesque: should any particular person or group be entitled to call down punishment on an arbitrary individual for the crime of having given offense? Even if such a rule were never used to stifle debate-worthy political or intellectual opinions (and there is every reason to suspect that it might), it would still be wide open to abuses of all kinds--blackmail, prosecution of personal or social vendettas, and so on. Why treat it as a free speech issue?

Now, it might be said that my distinction is a niggling, angels-on-pinheads affair; what does it matter whether a bad rule is shunned on free-speech grounds or on its own demerits? But as I explained in my previous posting, the "free speech" argument has been used to defend a lot of scurrilous behavior on campus, including acts that are profoundly damaging to the academic endeavor. These acts are not, pace Levy, a "red herring"; they are an excellent argument for avoiding overuse of the "free speech" shibboleth--especially when far more compelling arguments apply.

(Correction: Jacob Levy did not write the University of Chicago policy from which he quotes; I misread his reference to it as suggesting that. My apologies.)

Friday, November 29, 2002

Paul Krugman's undignified descent from distinguished economist to respectable economics-popularizer to partisan hack is already well-documented (by me and others), but even so, his most recent column must be read to be believed. In it, he states that (I'm not making this up) "[f]or most of the last 50 years, public policy took it for granted that media bias was a potential problem"; that "[t]he answer was a combination of regulation and informal guidelines"; and that because "much of that system has been dismantled", today "we have a situation rife with conflicts of interest", in which a "handful of organizations that supply most people with their news have major commercial interests that inevitably tempt them to slant their coverage, and more generally to be deferential to the ruling party".

Memo to Krugman: If you're going to accuse "major news outlets" of being "inevitably....deferential to the ruling party", it would help (1) not to do so in prompt, slavish imitation of recent remarks by Democratic party leaders Tom Daschle and Al Gore; (2) not to cite as your "most important example" a network (Fox News) whose nightly viewership is less than the daily circulation of the newspaper that carries your own twice-weekly, virulently anti-Republican column (to say nothing of all the other newspapers that publish your screeds as New York Times News Service features); and (3) not to propose government regulation as a solution at a time when the ruling party you so passionately rail against has just won complete control of the elected branches of the federal government (and hence, presumably, of any regulatory process that might be established).

Just a suggestion.....

Wednesday, November 27, 2002

Pharmaceuticals heiress Ruth Lilly has donated $100 million to Poetry Magazine, but Meghan O'Rourke, in Slate, and Eric Gibson, in the Wall Street Journal, both express their doubts about the gift's potential to rejuvenate an art form described by poet Dana Gioia (whom Gibson quotes) as "the specialized occupation of a relatively small and isolated group". It should come as no surprise, though, that both articles, in bemoaning the obscurity and unpopularity of poetry, somehow failed to mention the one word that would have given the lie to their effete lugubriousness. That word, of course, is "rap"; and compared to rap's multibillion-dollar market, Ms. Lilly's paltry nine-figure bequest is a tiny irrelevancy in the world of poetry.

Now, I don't mean to claim that the major hip-hop artists of our day are all creating poetic masterpieces. But then, neither were the greats of English-language poetry all living in a pristine world, blissfully free of the plague of mediocre doggerel. On the contrary, their work stood out precisely because it existed in the context of a living, even popular art form whose typical examples were, in retrospect, unmemorable or worse. Many of the giants achieved acclaim in their own day; some toiled in obscurity, only to be appreciated much later. But until Matthew Arnold popularized the idea of art as something that an audience needed to be taught to appreciate--setting the stage for the devastating schism between popular and "high" art that has left forms like poetry in such a sorry state today--none of the greats would have shied away from comparisons of their work with that of their most fashionable, most popular (and, they would confidently have asserted, self-evidently inferior) contemporaries.

But today's poets don't want to face such comparisons; rather, they consider themselves to be doing something entirely different from their pop counterparts--even as they reject any restrictions or boundaries on the form or content of their own work. They hide behind their illustrious claimed predecessors because they are naked. They tartly note the cavernous gap in artistic mastery that separates P. Shelley from P. Diddy--glossing over their own conspicuous inability to bridge that gap themselves. They may not be as popular as rap performers, they sniff, but, more importantly, they are appreciated by--who? The 12,000-odd readers of Poetry Magazine?

Perhaps what poetry needs is an equivalent to jazz music: a popular form that intellectuals can respect, that shows up classical music's supposed descendants, the practitioners of "modern serious music", for the masturbatory noisemakers they are, and that isn't afraid to interact with, influence and be influenced by a mass audience and its cacophony of overlapping mass tastes. Rap may be empty drivel, after all, but the next Tennyson is far more likely to arise from its crowd-pleasing dynamism than from a few insular scribblers publishing obscurantist verses for tiny audiences and pathetically imagining themselves to be writing for the ages.

Monday, November 25, 2002

Federal Reserve Governor Ben S. Bernanke made an interesting speech last week, asserting the Fed's imperative to fight deflation by any means necessary. He also pointed out what is generally considered a basic economic fact:
[T]he U.S. government has a technology, called a printing press (or, today, its electronic equivalent), that allows it to produce as many U.S. dollars as it wishes at essentially no cost. By increasing the number of U.S. dollars in circulation, or even by credibly threatening to do so, the U.S. government can also reduce the value of a dollar in terms of goods and services, which is equivalent to raising the prices in dollars of those goods and services. We conclude that, under a paper-money system, a determined government can always generate higher spending and hence positive inflation.
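
Bernanke's assertion is essentially the textbook quantity-theory identity (standard macro notation; this gloss is mine, not part of the speech):

```latex
% Quantity theory of money:
%   M = money supply, V = velocity of money,
%   P = price level,  Y = real output
\[
  MV = PY \qquad\Longrightarrow\qquad P = \frac{MV}{Y}
\]
```

Holding velocity V and real output Y roughly fixed, any increase in M must show up as a proportional increase in the price level P--which is all the "printing press" argument requires. The catch, of course, is that V need not stay fixed: newly printed money can sit idle rather than chase goods.
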
Now, this would appear to be simple, inarguable common economic sense. But there are a couple of seeming exceptions that might give one pause. First there's Japan: after years of budget-busting Keynesian fiscal stimulus, the country is still mired in a deflationary slump, and the government--printing press and all--appears helpless to do anything about it. Many Japan commentators claim that it's bureaucratic incompetence, not a lack of available means, that's hamstringing the government and preventing the necessary economic repairs. But the measures these pundits typically advocate--reforming the banking system, allowing large but hopelessly debt-ridden concerns to fail, letting banks with humongous portfolios of nominally-performing-but-effectively-deadbeat loans go under--have nothing to do with simply running the presses and printing away deflation. Why, then, aren't they all (apart from Paul Krugman, who has been pushing the inflation solution to Japan's ills for years now) just advocating the obvious?

Well, perhaps Japan is an exceptional case--its savings rate is astronomical, the economy lacks transparency, the culture is inscrutable, etc. etc. etc. But there's another puzzling exception that hasn't been mentioned much in this context. It's a bit like the famous "curious incident of the dog in the night-time" in the Sherlock Holmes story: the dog didn't bark, and its very silence passed unnoticed despite its oddity. I refer to the case of the US over the last five years.

From October 1997 to October 2002, broad money supply (M2) in the US rose more than 44 percent; the broadest measure, M3, rose 55 percent. These are rates not seen since the early eighties, when inflation was much higher. The printing presses seem to have been running full tilt, but inflation has still been low enough to prompt worries of impending deflation. Where has all the money gone?
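
As a quick sanity check on those figures--a back-of-the-envelope sketch of my own, not anything from the cited statistics--the cumulative 44 and 55 percent increases translate into annualized growth rates via the standard compound-growth formula:

```python
# Annualized growth implied by a cumulative growth factor over n years.
# The 44% (M2) and 55% (M3) totals over Oct 1997 - Oct 2002 are from the
# text above; the conversion to a yearly rate is my own arithmetic.

def cagr(total_growth_factor: float, years: float) -> float:
    """Compound annual growth rate implied by a total growth factor."""
    return total_growth_factor ** (1 / years) - 1

m2 = cagr(1.44, 5)  # M2 up 44% over five years
m3 = cagr(1.55, 5)  # M3 up 55% over five years

print(f"M2: {m2:.1%}/yr, M3: {m3:.1%}/yr")
```

That works out to roughly 7.6 percent a year for M2 and 9.2 percent for M3--well above the period's consumer-price inflation, which is the puzzle the paragraph above poses.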

It turns out that more money does not always create more spending. As Stephen Roach of Morgan Stanley points out, "[newly generated] money must go somewhere, but initially it might be channeled into balance sheet repair, paying down debt, or a restoration of saving before it ends up in the real economy or the price structure. There are no guarantees of instant policy traction near a zero rate of inflation." This is particularly true of post-bubble economies like the current American one, where indebtedness is high, savings are low (and hence have much room to rise), and an investment-minded culture pumped full of cash may be perfectly happy to pour the surplus funds into yet another asset bubble (some suggest that urban real estate may already be playing that role, now that high-tech stocks have lost their luster). It would be comforting to think that Japan's malady is sui generis, and can be easily cured should it occur elsewhere. I just wish I were convinced....

Thursday, November 21, 2002

The academic blogosphere--including Eugene Volokh, Jacob Levy, and Glenn "Instapundit" Reynolds--is up in arms over threats to freedom of speech on college campuses. "[T]he regulation of merely offensive speech in classroom settings is an utterly noxious idea," writes Levy, and the rest resoundingly agree. I admit to being a trifle confused; my understanding was that the whole point of universities is that a student whose speech--in classroom presentations, on exam papers, in course assignments--is not even offensive but merely insufficiently scholarly can face penalties as severe as expulsion. Have things changed that much since I went to school?

Actually, not that much; on-campus ranting about free speech was popular back in my day, as well (although in Canada, where I studied, the rhetoric was always much more muted and less indignant). Now, I recognize the important role that free speech plays in a healthy democracy, as well as the dangers inherent in limiting free expression in society as a whole. But universities are not society as a whole, nor are they even democracies. They are institutions (ostensibly) dedicated to education and research, whose members voluntarily forgo all sorts of freedoms (such as the freedom to neglect one's education and the freedom to do shoddy research) for the sake of furthering the academic community's (and hopefully their own, similarly aligned) goals. And it would seem obvious that, say, broadly offensive speech (or, for that matter, false or even illucid speech) would often work to the detriment of these goals, by undermining reasoned, dispassionate debate.

Of course, the common response of free speech defenders to such observations is that university faculties and administrations, if given the power, would use speech restrictions to stifle legitimate political debate on campus. And it's certainly possible that some, or even many, might do so. Then again, those same administrations can easily use their power to grant or deny tenure, to fund departments, to admit or reject and graduate or fail students, and so on to exactly the same nefarious ends. And these instruments are scarcely less potent, in ruthless hands, than the right to make rules about, say, offensive language. It's hard to argue that, say, banning racial epithets on a campus has anything like the chilling effect of, say, requiring students to pass a course in which grades are given for writings and presentations based on their political content.

In fact, too much free expression has sometimes threatened the academic health of universities as seriously as too little of it. Thuggish behavior on campus--shouting down of speakers, destruction of leaflets or newspapers, even physically threatening behavior--often masquerades as "protest", with its perpetrators demanding absolute protection from punishment in the name of "free speech". The endless chanting of the free-speech mantra is thus a pitifully ineffective substitute for vigorous action to protect the scholarly collegiality of the modern academic environment.

Sadly, few academic leaders--let alone students or other citizens--take notions like "the scholarly collegiality of the modern academic environment" the slightest bit seriously these days. As I have written before, the modern liberal arts university is an institution adrift, bereft of serious purpose, and thus at the mercy of interest groups keen to hijack it to advance their own goals. In the hands of these groups, the "free speech" slogan is just another rhetorical bludgeon with which to pummel the university into submission; in the absence of serious defenders, or even a serious alternative vision, the university is helpless to defend itself.

In 1993, when University of Pennsylvania student Eden Jacobowitz was punished for shouting an insult (thought by some to be racist) at a group of African-American women who were celebrating loudly outside his dorm window, disturbing his (and his dormmates') studies, he became an instant martyr to the cause of free speech. Some decried him as a bigot; most deplored UPenn's persecution of him. Nowhere was the slightest attention paid to the real lesson of the story: that the University of Pennsylvania simply did not care about whether some students' raucous behavior might be disrupting their fellow students' efforts to study.

Then again, by 1993 the idea that a university might think to value studiousness over partying had long been relegated to academia's distant past. After all, such a policy might interfere with "free speech".

Tuesday, November 19, 2002

Andrew Sullivan is baffled that supposedly tough soldiers seem so skittish about the idea of accepting the presence of openly gay men in their midst. "Is it because they're afraid of being raped?", he asks. "C'mon. Assuming all gay men - or even any - are potential rapists is completely loopy. (And the same people who make this bizarre argument would scoff at a woman who screamed rape if a man looked at her in a sexually interested way.)"

Oddly enough, Sullivan doesn't evince the slightest mystification over the elaborate lengths to which the military goes to protect women--tough, hardened military women, mind you--from invasions of their privacy by men. Why aren't military showers, bunks and latrines co-ed? And why all the draconian rules against "fraternization" and so on? Are these women soldiers afraid of being raped by their well-disciplined colleagues? Are they afraid of being looked at by men with lust in their hearts?

Well, yes, actually--and understandably so. One doesn't have to believe that all straight men are rapists, or that the male sexual gaze is inherently brutalizing, to understand why women (in this culture, at least) feel unsafe bathing naked around male soldiers (or groups of them). It doesn't matter if most of the time, nothing untoward happens; it only takes one major incident--or a long-enough sequence of small, subtle, incremental steps--for all assumptions of safety to break down completely. (If it makes you a weakling to be bothered by glances, after all, then what about playful pats on the shoulder? Or elsewhere? Where does the line get drawn? How? And by whom?)

The instinctive anticipation of this threat of sudden breach or gradual erosion of personal safety is likely the source of that general feeling of discomfort that causes women to want to guard their privacy from men when forced into close quarters with them. It shouldn't be surprising, then, that men would want to take the same precautions with respect to gay men; after all, gay men may not be substantially worse than straight men in this regard, but there's no reason to believe they're any better (and, as straight men themselves are well aware, that's plenty bad enough, in the worst cases).

Of course, military training is designed to break down instinctive anticipations and general feelings of discomfort and replace them with rigorous discipline; and if it turned out one day to be militarily necessary to drill soldiers to get over their discomfort around gay comrades the way they get over, say, terror of enemy fire, then the army would no doubt do what had to be done. But each such psychic hardship imposed on soldiers exacts a toll, and for the military to wish to avoid an avoidable one, so as to be able to concentrate on the unavoidable ones, is hardly a demonstration of bigotry or cowardice. Rather, it demonstrates a recognition of, and respect for, the limits and costs of discipline, and an unwillingness to bury or deny those costs in order to indulge one or another variety of political dogma.

In Ha'aretz, Danny Rubinstein argues that Palestinians would actually prefer a right-wing victory in the upcoming elections, because Sharon "did not succeed in reducing the violence or stopping the terrorist attacks", and his continued rule "is the only way Israelis will learn how powerless the right really is - and may in turn germinate the seeds of a just settlement." He may well be correct; the problem arises when one considers just how the Palestinians define a "just settlement". After all, if the definition looked anything like the permanent settlement Ehud Barak offered--or even the unilateral withdrawal being proposed by Labor leadership candidate Amram Mitzna--then surely the Palestinians would be rooting for a Mitzna victory. If Rubinstein is right, then, the Palestinian notion of a "just settlement" must be a much more far-reaching capitulation than even Labor doves are willing to contemplate--that is, something that most Israelis (understandably) consider tantamount to acquiescing in Israel's complete destruction.

I asserted a few months ago that according to "the solid majority view" among Israelis, they are already "the undeclared winning side in the conflict" of the last two years with the Palestinians. But there is a subtle assumption buried in that view: that Israel can continue to impose and even tighten its crackdown on the occupied territories indefinitely, until its residents eventually stop seeing their suffering as a worthwhile price to pay for maintaining their low-level campaign of terrorism against Israel. This assumption may be correct; but it's also possible that the current willingness of (according to polls) a majority of Palestinians to endure hardship of the worst sort, just for the sake of persevering in their efforts to kill as many Jews as possible, will continue for years to come. In that case, Israel faces a long period of walking on an extremely slippery tightrope between, on the one hand, indulging the temptation to resort to extreme cruelty in an attempt to hasten the moment of Palestinian abandonment of terrorism, and on the other, indulging the temptation to forget--as so many Israelis did from 1993 to 2000--that conciliatory concessions to a polity that enthusiastically embraces mass murder are ultimately suicidal.

Sunday, November 17, 2002

NEW YORK (ICBW) -- In the wake of the spectacular opening-weekend success of the latest Harry Potter film, "Harry Potter and the Chamber of Secrets", authorities are bracing for the likely consequence: a spate of children injuring themselves while imitating Potter's magic feats. "We want to warn the public that magic is a dangerous business," said Albus Dumbledore, headmaster of Hogwarts Academy of Witchcraft and Wizardry, and a technical consultant to the filmmakers. "Untrained muggle children shouldn't even attempt to dabble in it."

After the release of the first film in the series, "Harry Potter and the Sorcerer's Stone", numerous young viewers sustained injuries uttering backfired spells and playing quidditch with dangerously underpowered homemade brooms. This time, there have already been scattered reports of flying-car accidents and careless petrifications. "The magic stunts performed in the film all involved qualified Hogwarts-trained professionals," explained Dumbledore. "But many youngsters see a group of child actors appearing to use powerful spells and potions, and figure, 'hey, I can do that.'"

In preparation for the film's opening, hospitals throughout the US have stocked up on mandrake root and phoenix tears, and Dumbledore said his staff will be available around the clock to handle emergencies. "But the best precaution," he reminds viewers, "is to stick to non-magical pursuits. It's funny, really--if our own students had a choice, most of them would neglect their magic completely and spend all day playing video games and chattering on their blasted cellphones."

Thursday, November 14, 2002

Mark Kleiman writes about the harrowing story of thimerosal, a mercury-based vaccine additive that some suspect is responsible for the epidemic of autism that appears to have broken out in California in the 1990s. Republican Congressman Dick Armey has slipped a provision into the new Homeland Security bill that, according to the New York Times, "was apparently intended to protect Eli Lilly, the pharmaceutical giant, from lawsuits over thimerosal". All very shocking--until, that is, one checks out the CDC's position on thimerosal....
There is no evidence of harm caused by the minute doses of thimerosal in vaccines, except for minor effects like swelling and redness at the injection site due to sensitivity to thimerosal.
Now, I hold no brief for Dick Armey, and I don't care at all for special-favor clauses being sneaked into important legislation. But in the absence of proper tort reform (and in the presence of widespread hysteria about technology--including lifesaving technologies like vaccines), I would guess that it's more likely that this particular political move will end up saving lives (by impeding the onslaught of tort lawyers and junk-science scaremongers on the practice of universal vaccination) than that it will actually harm anyone.

Of course, I'm no toxicology expert, and if anybody can point me to actual, substantial evidence implicating thimerosal in vaccines as a cause of serious health problems, I'd be interested to hear about it. Perhaps, though, the CDC ought to be informed first.

Tuesday, November 12, 2002

The campaign to get universities to divest themselves of Israeli investments is heating up; both supporters and opponents are comparing it to the South Africa divestment campaign of the 1980's. And both sides are more right than they realize.

Of course, Israel is nothing like South Africa was. It's a full democracy with a universal franchise, not at all like the apartheid regime. It has recently engaged in a multi-year process of trying to set up the territories it occupies (as a result of a war it fought in response to a legitimate casus belli) as an independent state, being stymied only by the refusal of the prospective government of that state to abandon terrorism against Israelis. Its own citizens are neither racially segregated nor otherwise politically oppressed, and have the full range of democratic freedoms, including speech and religion.

But then, South Africa wasn't a particularly obvious choice of target, either. It was hardly the worst human rights abuser of the era, even on its own continent. Opponents of the boycott routinely pointed out that South African Blacks were better off than they would have been in just about any other country in Africa, and the boycott itself caused no small amount of suffering among them. If one were to choose a political evil to target in the 1980's based on moral and humanitarian considerations, apartheid would have been a legitimate but relatively minor choice, paling by comparison with literally dozens of others.

But that's the dirty little secret of politically motivated boycotts: they are not primarily chosen on the strength of their justifications or the urgency of their goals. Rather, their adherents participate in the hope of making a political point in some other, entirely separate context. The South African boycott, in truth, was about many things--race relations in the US, Cold War geopolitics in the Third World, and anti-corporate populism, to name three--but the actual conditions of non-White South Africans were at best peripheral. Likewise, today's university divestment campaigns have many motivations--"anti-globalist" leftism, anti-Americanism, even, on the fringes, some anti-Semitism--but sincere concern for the plight of the Palestinians (for the vast majority of whom the Oslo process has been an unmitigated disaster that a boycott of Israel would likely only further exacerbate) can't be very high on the list. It is fortunate that some major academic leaders are seeing through the sophistries and rejecting the divestment movement's meretricious moral case.

Monday, November 11, 2002

The Canadian government's occasionally impolitic positions with respect to the war on terrorism, to which I've alluded previously, and which are the subject of a piece by Jonah Goldberg in the National Review, may be somewhat puzzling to Americans. I will try to explain (without excusing) them; the explanation may also provide useful insight into some other Western countries' strangely unsupportive attitude towards American efforts against terrorism.

The first thing that Americans should understand about Canadian politics is that Canadians are, by and large, a politically uncommitted bunch. Only a tiny fraction of the population belongs to a political party, and most of the rest are happy to vote for whichever party seems to be addressing the pocketbook issues of the day (or to be winning handily enough to be worth currying favor with, in the hopes of receiving a greater share of federal pork-barrel spending after the election). To the extent that there is any mass partisanship in federal politics, it is largely a matter of inter-regional conflict, with parties increasingly representing their regional power bases. Canadian foreign policy is simply not on the political radar screen, as Canadian voters understand perfectly well their country's utter insignificance in the geopolitical arena.

As a result, Canadian governments target their foreign policy largely at the small domestic constituency that actually cares about it. Naturally, this group disproportionately inhabits the academic and media worlds, where it clings, like its American and European counterparts, to a familiar breed of woolly-minded leftish anti-Americanism with the dogmatic uniformity typical of small, concentrated intellectual groups. Canadian journalists and academics are also somewhat self-selected for anti-Americanism, since the most successful among them usually have the option of enhancing their prestige and paychecks south of the border--an option many of them exercise--and the less successful thus have ample cause for "sour grapes" resentment of an American cultural and intellectual pre-eminence that excludes them.

There is also a strain of anti-Americanism that runs through most segments of Canadian society, and that has little to justify it beyond common "us vs. them" home-team-rooting. It's not particularly intense or virulent, though, and it's counterbalanced by Canadians' general sense of neighborly good feeling towards folks south of the border. (A large fraction, after all, have friends or relatives in the US, visit often, and are deeply immersed in American popular culture. Pernicious stereotypes about American national characteristics are hard to sustain under those conditions; one has to live among Americans for years, as I have, to develop them.) But on issues that don't really matter (and let's face it: what Canadian politicians have to say about world affairs almost never really matters), playing to anti-American peevishness rarely causes a politician lasting damage.

On matters of substance--i.e., action--though, I believe that a solid majority of Canadians invariably stand firmly with their American allies. They helped house stranded American travelers on September 11th, when American flights were grounded; their soldiers joined the US in the Afghanistan campaign; and they continue to cooperate with their neighbors on a variety of continent-wide security matters. The longest undefended border in the world will no doubt remain undefended--and friendly--for a long time to come.

Sunday, November 10, 2002

Thomas Friedman's at it again. His latest column says a lot of really silly things--such as (I'm not making this up) that the "Bush hard-liners" who hope to invade Iraq and topple Saddam Hussein don't "really want to invest in making the world a different place, or....have any imagination or inspiration to do so". (Wiser souls, he explains, appreciate the far greater world-changing power of--I'm not kidding--"diplomacy".) But his introductory paragraph repeats a canard whose absurdity will be apparent, sadly, to all too few readers. He cites, with approval, a "senior European diplomat" who complained that the Bush administration is failing to tell Israel that it "needs to find a secure way to get out of the settlements."

Now, it may well be that an Israeli-Palestinian peace agreement will one day be signed, one clause of which involves the evacuation of most or all of the Jews living in the West Bank and Gaza. (I'm skeptical--it seems unlikely that Palestinians would be willing to live at peace with a Jewish state a few miles away, but not with a Jewish neighborhood on the next hill--but I suppose it's still possible.) However, it doesn't make a lot of sense to start talking to the Israelis about dismantling settlements in the occupied territories when they're currently militarily occupying almost all the West Bank's major cities. As for the Palestinians, they conspicuously do not refer to the violence that first erupted in September 2000 as the "Settlements Intifada", were not responding at the time--even as a pretext--to a visit by Ariel Sharon to a settlement, never refer to removal of the settlements as their primary goal, and do not discriminate between civilians living in Israel proper and those in the territories when executing their terrorist attacks. Nor is there any indication that the failure of the Camp David and Taba negotiations hinged in any significant way on the settlements--an issue on which the Israelis were in fact quite flexible, to no avail.

The real importance of the settlements, though, lies in the role they play in Friedman's worldview and that of his European diplomatic friends. For them, it is crucial that they find something that Israel must be cajoled into conceding--otherwise, their negotiations-based strategy is self-evidently doomed to fecklessness, given the Palestinians' refusal to bow even to harsh Israeli military pressure, let alone to mere diplomatic pestering. But what Israeli concession can they possibly portray as a key goal for the diplomats? Military restraint is a non-starter, from Israel's point of view, since it's been repeatedly and amply proven to be of no use whatsoever in winning Palestinian reciprocity. Likewise, most of the generous long-term offers spurned at Camp David, such as full statehood and compromise on Jerusalem, have lost all their plausibility as bargaining chips in the eyes of Israelis.

Settlement-dismantling, on the other hand, is not a completely quixotic goal; it still retains a modest constituency within the Israeli body politic, mostly for various internal political reasons. Hence, if Friedman et al. can (mis-)represent settlement evacuation as the potential breakthrough step in a process of mutual compromise, then--voila!--they can claim a vital role for diplomacy in resolving the conflict. It's a slender reed, to be sure; but it's the only one available, and without it, they would literally have no justification, however feeble, for trying to insinuate a diplomatic component into Israel's muscular (and comparatively far more effective) response to terrorism.

Monday, November 04, 2002

A few months ago, Slate somehow managed to cajole the unlucky Virginia Heffernan into subjecting herself to an episode of HBO's "Real Sex" television series and then reporting her impressions to readers. Her staggering conclusion: the program's "soft sociology provide[s] an excuse to look at soft pornography". The article must have drawn a high hit count, though, because now they've sent Emily Nussbaum to check out New York's Museum of Sex. While Ms. Nussbaum is actually fairly upbeat, describing one exhibit as "an impressive combination of titillating and educational", she can't quite disguise the museum's real target demographic, admitting that the guards "have to shoo people away" from the hardcore stuff. Odds are that most of the patrons "transformed....into zombies" by the "lurid close-up genital pistons of a Dolores Del Rio porn film" don't share Nussbaum's gender.

Why, then, did Slate twice send a woman to do a man's job? Perhaps because an honest male assessment of this ever-so-slightly-dressed-up smut would have to be brutally frank about its producers' obvious goals in presenting it--and hence, by implication, about Slate's obvious goals in reviewing it.

Sunday, November 03, 2002

Lisa Dusseault tells an amusing story about an encounter with Canadian tourists in San Francisco who, on hearing that she is an expatriate Canadian living in the US, commiserate with her over her plight. I understand her nonplussed reaction; I must at least partially disagree, however, with her claim that "it's not so different" living in the US, as opposed to Canada.

In one sense, she's clearly correct: many of the horror stories that Canadians tell each other (and themselves) about America are built around serious misunderstandings of their southern neighbor. I can say this with some authority, since I labored under several such illusions myself while growing up in Canada, and was only thoroughly disabused of them when I moved to the US. For example, Canadians hear about the horrific American crime rate, and assume that life in America is a daily crap-shoot for survival. In fact, the means to insulate oneself from crime--peaceful, safe suburban neighborhoods and well-protected shopping areas, workplaces and recreational districts--are readily available (and affordably accessible) to a very large fraction of Americans. Like Lisa, I have never been a crime victim in the US, and I certainly haven't had to spend a fortune to buy my safety, as Canadian myths would suggest.

Similarly, health care was once the canonical example of the contrast between terrifying American chaos and reassuring Canadian orderliness. Again, though, most Americans have employer-provided healthcare that provides a level of protection comparable to the standard Canadian regime (indeed, arguably superior to it, given the tales I've heard lately about the collapsing Canadian health care system). And US politics, for all its faults, is much harder to criticize these days in light of the appalling way that Canadian officials have embarrassed themselves when commenting on American foreign policy.

And then there are the attractions of American life. For example, the US has a service ethic far superior to Canada's; being a customer of any kind in America is a real joy compared with Canada's more, uh, European approach to customer care. And, as Lisa points out, economic opportunity can also be considered a quality-of-life issue: a more enjoyable, interesting, challenging job represents a lifestyle improvement above and beyond any material standard-of-living increase it may provide.

And yet...there are real cultural differences that can make the adjustment to American life difficult for a born-and-bred Canadian, even after the misconceptions have been discounted. For example, I find American interpersonal culture to be marked by a peculiar level of unabashed self-centeredness and self-indulgence--a kind of naive thoughtlessness about the feelings and concerns of others when it interferes with one's own "pursuit of happiness". I'm not making a political statement here, or alluding to any grand philosophical principle; rather, I'm speaking of the very fabric of day-to-day American social interaction. I once had lunch in a restaurant in California with a group that included a German friend; when this friend needed to squeeze behind another diner's chair at the next table in order to leave the restaurant, the other diner, rather than shift his chair to allow my friend to leave, continued his conversation for several minutes, happily oblivious to the buttocks mere inches from the back of his head. Eventually my polite friend was forced to bring his problem explicitly to the gentleman's attention, at which point he was happy to assist by shifting his chair forward slightly. "Only in America", muttered my friend after we had left. I believe he's right; many other cultures tolerate behavior that North Americans might consider deliberately rude, but only in the US is it unsurprising that an otherwise non-hostile person would so egregiously fail, in all innocence, to take others' concerns into consideration at all. To someone raised on diffident Canadian politeness, the adjustment to this American-style solipsism can be difficult.

There are other differences, as well. America is a much more class-conscious society than Canada, one in which people are keenly aware of markers of social status. (I can't imagine a Canadian, for instance, conspicuously dropping blatant, smug references to his or her alma mater, the way many Ivy League-educated Americans seem to--sometimes literally within minutes of meeting me.) On the plus side, the level of diligence, industry and entrepreneurialism in the US far outstrips the Canadian norm. (The experience of shame at one's own laziness also requires some adjustment, as it turns out.)

None of these differences is in itself particularly taxing to deal with, of course; nor do they, taken together, justify receiving condolences from visiting fellow Canadians. But they do cause me to miss, on occasion, the country of my birth--and to experience a certain feeling of warm comfort on each return visit to the country I still think of, in a way, as home.

Friday, November 01, 2002

Mark Kleiman and Eugene Volokh are apparently both of the opinion that the Boy Scouts' policy of excluding atheists amounts to a kind of religious discrimination. (Both concede the Scouts' Constitutional right to their policy, but consider it morally wrong nonetheless.) As Volokh puts it, "[i]f the Scouts excluded Catholics -- everyone else, Jewish, Protestant, or what have you is fine, but not Catholics -- we'd rightly condemn them, even if they said 'Rejection of Catholicism is one of our core beliefs.' Likewise, I think, when they exclude atheists." (Kleiman makes the same point, right down to the choice of analogy.)

I rather doubt that the Scouts' opposition to atheism is as narrowly defined in practice as Kleiman and Volokh claim; would the Scouts accept, for example, a Satanist troop, or one that worships only a particular (living) charismatic cult leader? If the Scouts turn out to have bona fide doctrinal standards compatible with most religions but exclusive of a few, then they would be no different from a group that rejected, say, believers in performing child sacrifice rituals or murdering all heretics (except, of course, in the sense that Kleiman and Volokh probably find adherents of such ideas far more worthy of exclusion than those, including atheists, who happen to offend the Scouts' somewhat woollier religious principles).

I also wonder whether the two professors have paused to consider in just what company they have placed themselves with their choice of analogy. The most conspicuous advocates of the idea that atheism is a religious conviction--comparable to, say, Catholicism--are fundamentalist Christians attempting to inject "Creationism" into the public-school curriculum. After all, if atheism is a religion, just like literalist Christianity, then it's perfectly valid to claim that Darwin's theory of evolution is as much a religious position as is the "Genesis theory" of human origins. Likewise, if absence of religion is just another religious creed, then school voucher programs that encompass confessional schools are not only Constitutionally permissible--they might even be mandatory, under the judiciary's current broad reading of the First Amendment, to prevent the government from "establishing" atheism, over all other religious doctrines, as the official "faith" of the public school system.

But atheism is not a religion; the absence of religion is very different from the presence of one. Teaching evolutionary biology in science class, and rejecting all of its religious alternatives, is not the same as teaching a single religious alternative. Forbidding the promulgation of any religion in public schools is nothing at all like exclusively promulgating a single one. And likewise the Boy Scouts, in requiring their members to affiliate with a religion--any religion--are not "excluding" a particular religion. Kleiman and Volokh should be happy about that; the consequences of atheism being designated by convention as just another religious belief would be most unlikely to please either of them.