Aficionados of political debate are familiar with "and so we see...." stories--pedantic screeds spinning a single simplistic anecdote into a supposedly iron-clad proof of some partisan thesis: that the private sector always works better than the government (or vice versa), or that moralism (or realism) is always the more successful foreign policy, or that prohibition (or legalization) of vices always causes far more problems than it solves. The academic world has its own favorite topic for its "and so we see...." stories: whether research should always be directed by pure scientific curiosity or by practical societal needs. A perfect example is Siddhartha Mukherjee's recent article in The New Republic, in which the tale is told of a scientist's doggedly impractical study of the anthrax bacterium, leading ultimately to a now-invaluable antidote to its toxin. And so we see, from this inspiring fable, that Vannevar Bush's legendary postwar decision to base government funding of research on peer review rather than politically determined goals was a necessary prelude to every scientific advance made since then.
The most obvious thing wrong with this claim is that it simply doesn't reflect reality. Scientific research in America may be peer-reviewed, but government funding priorities are set with plenty of input from the political echelon. John Collier's study of the anthrax toxin may sound obscure and useless to a doctor like Mukherjee, who is accustomed to research aimed directly at curing human diseases; but to, say, a particle physicist, Collier's work is about as practical as it gets. Real research isn't divided into neat "pure" and "applied" categories; it's all at least somewhat influenced by both scientific and human values.
And what mix of the two produces the greatest breakthroughs? Nobody knows. Alexander Fleming's discovery of penicillin was the result of his aimless fiddling with mold specimens--but Salk's and Sabin's polio vaccines were the results of heavily funded, highly directed applied research programs. And for each such breakthrough, there are numerous examples of both curiosity-driven and goal-driven research that went nowhere. In practice, the directed and undirected approaches to research can each play a useful role in identifying topics ripe for rapid progress and in compensating for the other's blind spots. Collier is certainly correct, for instance, in pointing out that targeted funding can produce "a lot of junk aimed at getting some of that pork-barrel money"; but peer-reviewed funding tends to produce entire pools of researchers reinforcing each other's misplaced interest in the minutiae of long-sterile fields. Both errors can be mitigated by judicious application of both peer review and societal guidance.
If the history of great scientific advances suggests anything about what fosters them, it is that they tend to occur when their time has come, pretty much regardless of the research environment of the moment. Think of the classic discoveries (calculus and evolution are two famous examples) that were made entirely independently, around the same time, by two completely separate individuals with the same spontaneous insight. More recently, the relative isolation of the East Bloc scientific community during the cold war provided numerous cases of the exact same phenomenon (as has the separation of the "classified" and "public" research worlds in the West). Great breakthroughs have resulted from unfettered research freedom, from narrowly circumscribed, goal-oriented effort, and from every gradation in between. And anyone (like Mukherjee) who claims to have the perfect recipe for successful research investment in the future obviously doesn't understand research's baffling past.