An anonymous resignation letter from a Swiss graduate student has made a bit of a splash on the Internet lately, with many noting the uncanny accuracy of his unflattering portrayal of modern academic research. I recommend a full reading, but the gist of his description is that academic researchers today have no interest in producing what an external observer would describe as "good research"--that is, research that significantly increases human knowledge and benefits humankind. Instead, they devote their efforts to advancing their own petty academic-political interests: producing a large volume of narrow, conformist, incremental publications that improve their publication statistics and citation numbers; playing political games to advance their standing in the "research community"; and exploiting their graduate students for the benefit of their own careers, at the expense of the students'--and the field's--best interests.
It's easy to compare today's research world unfavorably with its counterpart of fifty years ago or more, since what remains of the latter by this point is little beyond its legendary successes, and certainly not its day-to-day workings. But the truth is that academic research has always had more than its share of mediocrity drenched in politics, obscurantism and conformity. Consider, for example, this little memoir, written in 1988 by the late great Middle East scholar Elie Kedourie (and recently pointed out by his fellow Mideast scholar Martin Kramer). It turns out that Kedourie, too, abandoned his doctoral studies, and for reasons not at all dissimilar to those of the Swiss graduate student. Kedourie, however, was not a modern Swiss scientific researcher, but rather a history student at Oxford in 1953.
Unfortunately, the problems that have plagued researchers from Kedourie in 1953 to many young scientific researchers today are in fact fundamental weaknesses in the basic structure of academic research. Researchers have always formed their own tiny expert communities, and thereafter demanded to be evaluated solely on the basis of peer review--that is, on the assessment of that tiny community, with no outsider being considered sufficiently expert to pass judgment on their work. Academic newcomers are forbidden entry until they first complete a multi-year program whose primary requirements are slavish conformity to the methods and practices of their graduate advisor's community, and a large volume of difficult work exercising those methods and practices to the exacting standards of the advisor and a small, hand-picked selection of his or her community peers. And tenure has always guaranteed that these mini-communities will continue their hair-splitting lines of research long after the last shred of value to outsiders has been painfully squeezed out of them. It's hardly surprising, then, that academic research has historically been dominated by stifling conformity, petty politics and small-minded obscurantism.
If today's community is different from those of, say, a hundred years ago, the difference is primarily that research today is a "day job" in a way that it wasn't back then. In the era when virtually every professor was a poorly-but-steadily-paid college teacher indulging his scientific, literary or historical obsessions in the gaps between classes and perhaps publishing the odd fusty monograph every few years for an audience numbered in the dozens, it scarcely mattered whether those academic obsessions were with the great unsolved problems in science or, say, the mating habits of a particular unremarkable species of butterfly. Today, on the other hand, the average professor is fairly well-paid, with a light teaching load, either full tenure or a near-term expectation of it, and the promise of a multi-thousand-dollar research budget to spend on conference travel, graduate student assistants and other perks--if only he or she can generate the requisite volume of peer-blessed publications.
What was once an eccentric but harmless academic idiosyncrasy--the practice of publishing technical expositions of one's own research that at most only a tiny audience of peers will ever read--has thus become an enormously expensive and wasteful boondoggle. Abilities such as "selling" one's papers (writing them in a way that impresses one's peers), forming and leading networks of mutually logrolling researchers (much like the "alliances" in the reality TV show "Survivor"), and crafting CVs and proposals that coax grants out of funding agencies are now core academic skills far more important to career success than, say, deep scientific insight or vast erudition, much less teaching ability. The resulting research bears all the signs of this change: most of it is shallow and irrelevant, much of it is sloppy and error-ridden, and very little of it has a shelf life longer than the few months it takes to get it published and tacked onto a personal publications list.
It's not clear how to solve this problem, but a few obvious (though sadly unrealistic) mitigations come to mind. First of all, since 99 percent of all research is worthless, we should start by vastly shrinking the pool of researchers. The rule that virtually all full-time college professors must generate research as part of their employment is an absurd result of tiny colleges trying to increase their stature by emulating the top universities. It ends up not only generating a flood of pointless, abysmally low-quality "research", but also undermining the higher-quality research communities, forcing them to compete with the mediocre majority for funds, recognition and adherents.
Second, abolishing tenure certainly won't solve the entire problem--after all, most of the worst research is produced by workaholic untenured researchers, frantically churning out publications with which to establish their case for tenure. But the protections of tenure certainly contribute to the insularity with which academic researchers indulge their worst instincts without fear of adverse consequences. And given that academic training and peer review are guaranteed to squeeze every drop of non-conformist independent-mindedness from the peer-obsessed researcher, the likelihood of a researcher using the grant of tenure to break free from the constraints of conformity and rebel against the herd is negligible. Tenure has thus failed in its only justifying purpose, and never comes close to paying back its enormous costs.
Third--and most importantly--the evaluation of academic research needs to be opened up to a much wider range of assessors. As long as researchers can rely on log-rolling among peers to protect them from external accountability, they will continue to ignore any measures of the value of their research other than their own. The only way to force them to take into account criteria such as economic value, societal impact and the public's priorities, is to make them accountable to commercial, political and popular representatives, not just fellow academics. The howls of outraged disgust that invariably greet such suggestions reflect not only the academic dogma that asserts that anyone outside the holy circle of researchers is an ignorant yahoo incapable of grasping even the basics of evaluating scholarship, but also the rather baser fear that external evaluators might not be quite so indulgent towards researchers as they are toward themselves and each other. It's high time that dogma were dispensed with, and that fear realized.
2 comments:
I agree with most of the content. In particular, I think the time has come for non-peer reviews of articles to be published.
The only part that I question is your suggestion that criteria for areas of research should involve some economic benefits or have a societal impact of some sort. I always understood that the idea of tenure was to enable researchers to produce research results that may not have any practical consequences or any effect on life in general but merely advance knowledge for its own sake.
I don't think that's the primary purpose of tenure. In fact, tenure doesn't actually provide that kind of broad protection, since it doesn't protect against an administration deciding to shut down an entire university department--say, because it judges the department's overall research output to be of insufficient practical value. Rather, the original justification for tenure was to prevent administrations from firing individual professors whose research was politically or scientifically controversial--say, research that incurred the wrath of politically powerful figures either in the university or in the government that supports it.
Regardless, the problem with encouraging professors to "advance knowledge for its own sake" is that modern research typically requires considerable material resources--including the research-directed portion of the professor's salary, as well as the costs of equipment, graduate students, support staff, conference travel, and so on. The people who provide all those resources--most frequently, taxpayers--would likely be quite loath to provide them so generously if they had absolutely no assurance that the resulting research has at least a reasonable chance of providing practical benefits in return. And in fact, researchers are generally happy to provide suitably grandiose--though often wildly disingenuous and laughably tenuous--arguments for the immense practical significance of their research. All I'm asking is that those arguments be opened up to much wider and more thorough scrutiny, so that when those who bankroll the research decide whether to continue to do so, they can rely on more complete and balanced information than they typically get today from the researchers themselves and a few handpicked colleagues.
On the other hand, you seem to believe that having researchers "advance knowledge for its own sake"--that is, study absolutely whatever they feel like studying, on the university's dime, with no questions asked--would be a good thing all around. Could you perhaps explain why you believe that?