One of the recipients of the 2009 Nobel Prize in Physics was Charles Kao (shown here), who laid the empirical and theoretical groundwork for fiber-optic communications. His breakthrough paper, written in 1966 with George Hockham, appeared in IEE Proceedings, the journal of Britain’s Institution of Electrical Engineers.
The IEE and its proceedings no longer exist. The society merged with the Institution of Incorporated Engineers in 2006 to form the Institution of Engineering and Technology. Papers on fiber optics now appear in IET Communications.
I don’t know what the impact factor of IEE Proceedings was in 1966, but its successor’s 2011 impact factor was 0.963. In case you’re unfamiliar with the definition of impact factor, a 2011 value of 0.963 means that papers published in IET Communications in 2009–10 were cited in 2011 an average of 0.963 times.
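The arithmetic behind that definition is simple. Here is a toy sketch in Python; the citation and paper counts are invented for illustration, not real figures for IET Communications:

```python
def impact_factor(citations_this_year, papers_prev_two_years):
    """A journal's impact factor for year Y: citations received in Y
    to papers the journal published in years Y-1 and Y-2, divided by
    the number of papers it published in those two years."""
    return citations_this_year / papers_prev_two_years

# Hypothetical numbers: 963 citations in 2011 to 1000 papers from 2009-10.
print(impact_factor(963, 1000))  # prints 0.963
```

So a journal can have an impact factor below one even though some of its individual papers are cited hundreds of times; the figure is an average over everything the journal published.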
An impact factor of less than one is modest. According to Google Scholar, Kao and Hockham’s paper has been cited 453 times, which is far from modest.
A weakening relationship
Given the technological importance of fiber-optic communication, it’s not surprising that Kao and Hockham’s paper has appeared in so many reference lists, despite being published in a low-profile British engineering journal. In general, however, ambitious scientists tend to submit their best papers to high-impact-factor journals. Whether scientists like it or not, impact factor has become a measure of scholarly value. And if the best papers do indeed appear in the best journals, citations should correlate positively with impact factor.
But while impact factor continues to loom large in academia, scientists and other scholars increasingly discover individual papers through search engines and arXiv, which ignore a paper’s publishing home and impact factor. If scientists begin to publish without regard to a journal’s perceived prestige, then the positive correlation between citations and impact factor will weaken.
Thanks to a new paper by George Lozano, Vincent Larivière, and Yves Gingras, we don’t have to speculate about the relationship between impact factor and citations. The three authors, who are all based at the University of Montreal, analyzed bibliometric data from 1900 to 2011. They calculated impact factors and determined citation rates in three subject areas: natural and medical sciences combined, social sciences combined, and physics alone.
In all three areas, the correlation between impact factor and citations grew steadily stronger from 1900 to around 1990, when it peaked and then began to weaken. The trend was sharpest in physics, whose practitioners were among the earliest and most enthusiastic adopters of internet publishing; it was weakest in the social sciences.
Lozano, Larivière, and Gingras also determined what fraction of the 5% most cited papers appeared in the 5% most cited journals. From 1900 to 1960, the most cited journals attracted on average a more-or-less steady 1.4% of the most cited papers. Eugene Garfield and Irving Sher developed and introduced the impact factor in the early 1960s. Thereafter, whether coincidentally or not, more and more of the most cited papers appeared in the most cited journals. By around 1990, the percentage had peaked at 2.2%. By 2011 it had fallen to 1.9%.
The use of impact factor to evaluate papers and, by extension, authors evidently irks Lozano, Larivière, and Gingras. To a list of six existing criticisms of the use of impact factor in academia, they add their own finding and argue (with my emphasis added) that:
As the relationship between paper citation rates and [impact factor] continues to weaken, and as more important papers increasingly appear in more diverse venues, it will become even less justifiable to automatically transfer a journal’s reputation and symbolic capital on to even its most recently published papers. This should force a return to direct assessments of paper quality, by actually reading them.