A Washington Post opinion article by Steven Pearlstein entitled “Four tough things universities should do to rein in costs” got a lot of buzz on social media yesterday. The attention paid to it isn’t surprising: it’s election season, after all, and skyrocketing student debt numbers have focused candidates’ attention on the cost (and value) of higher education. While that’s not a bad thing in and of itself, many of the arguments leveled by college critics don’t stand up to close scrutiny, as Dan Drezner usefully points out in his Post column today.
One point that Dan makes deserves further elaboration. In the original article, Pearlstein writes the following:
“The vast majority of the so-called research turned out in the modern university is essentially worthless,” wrote Page Smith, a longtime professor of history at the University of California and an award-winning historian. “It does not result in any measurable benefit to anything or anybody. . . . It is busywork on a vast, almost incomprehensible scale.”
The number of journal articles published has climbed from 13,000 50 years ago to 72,000 today, even as overall readership has declined. In his new book “Higher Education in America,” former Harvard president Derek Bok notes that 98 percent of articles published in the arts and humanities are never cited by another researcher. In social sciences, it is 75 percent. Even in the hard sciences, where 25 percent of articles are never cited, the average number of citations is between one and two.
That is true, in the sense that those numbers do appear in Bok’s book (though it is worth noting that Bok’s overall assessment of higher education differs dramatically from Pearlstein’s). As Dan points out, however, the numbers are “badly outdated, relying on a study that first appeared in 1990 and compares apples and oranges.”
I’d go a bit further: not only are the numbers outdated, but the general claim—that an increased volume of academic research has produced a corresponding decline in relevance—is exactly backward.
In a recent issue of the Journal of the American Society for Information Science and Technology, Vincent Larivière, Yves Gingras, and Éric Archambault take on the impressive task of tallying citations per article from 1900 to the present, using data from Thomson Reuters’ comprehensive Web of Science. (An ungated copy of their article is here). The authors examine the number of citations that an article has received two and five years after publication. They find that, despite the remarkable increase in the number of journals published, the percentage of papers receiving at least one citation has been steadily climbing for decades. Their Figure 1 demonstrates that this trend holds across all fields except the humanities (in which, as they note, scholars are far more prone to cite books rather than articles).
These trends shouldn’t come as much of a surprise to, well, anyone who reads academic journals. A typical article cites far more works than it will ever be cited by, so the growth in the number of academic journals over time has expanded the pool of citations far faster than the pool of citable articles. As Larivière, Gingras, and Archambault also point out, those citations increasingly go to more specialized articles in more specialized journals—a welcome indicator of the growth of knowledge.
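The arithmetic here is easy to sketch. As a purely illustrative toy model (the numbers below are hypothetical, not the Larivière, Gingras, and Archambault data), suppose each article cites R earlier articles and those citations land uniformly at random across the literature. Then the citation count a given article receives is roughly Poisson with mean R, and the expected uncited share is about exp(−R). Real citation distributions are heavily skewed toward a few star papers, so actual uncited shares are far higher than this crude bound suggests, but the direction of the effect is the point: longer reference lists mean fewer uncited articles.

```python
import math

def expected_uncited_share(avg_refs_per_article: float) -> float:
    """Toy Poisson model: if N articles each cite R earlier articles,
    the literature holds N*R citations spread over N articles, i.e.
    R citations per article on average. Under uniform-random citing,
    an article's citation count is ~Poisson(R), so the chance it
    receives zero citations is exp(-R)."""
    return math.exp(-avg_refs_per_article)

# Illustrative (made-up) reference-list lengths for different eras.
for refs in (2, 10, 30):
    share = expected_uncited_share(refs)
    print(f"avg refs per article = {refs:2d} -> expected uncited share = {share:.2%}")
```

The only feature of the model that matters for the argument is monotonicity: as average reference lists lengthen, the expected share of uncited articles falls, which is exactly the pattern the citation data show.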
It’s still the case, of course, that a significant number of articles go uncited. But anyone who knows the Web of Science knows that it tracks citations in some truly obscure journals: any academic can point to journals in his or her own field that he or she has never even heard of. Such journals contribute disproportionately to the number of uncited articles.
Even taking those articles into account, though, the numbers tell a pretty unambiguous story: if we use the percentage of uncited articles as an indicator of the irrelevance of academic research, as Pearlstein himself does, academic research is more relevant now than ever before.