I have an admission to make. All the way through my Ph.D. studies and on into my first postdoctoral stint, I had no idea what an impact factor was. I still remember my first encounter with the concept. A number of fellow postdocs and students were discussing which journal a particular paper that one of them was working on should be sent to. After a bit of listening (and probably nodding along cluelessly with the discussion), I found a computer and looked it up. Most of you reading this probably know what it is. But, for the record, it is a measure of how often a journal’s recent articles are cited, on average. And this is supposed to allow one journal’s importance to be ranked against another’s. Of course, this whole endeavor is fraught with problems. But even so, it’s become well nigh impossible to hold an extended conversation about academic publishing with a group of scientists without impact factor considerations coming up.
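For the curious, the standard two-year impact factor is just a ratio: the citations a journal’s articles from the previous two years receive in a given year, divided by the number of articles the journal published over those two years. Here is a minimal sketch; the function name and the numbers are invented purely for illustration:

```python
def two_year_impact_factor(citations_to_recent_articles, articles_published):
    """Two-year impact factor: citations received this year to articles
    published in the previous two years, divided by the number of
    articles published in those two years."""
    if articles_published == 0:
        raise ValueError("no articles published in the two-year window")
    return citations_to_recent_articles / articles_published

# Invented example: 1,200 citations this year to the 400 articles the
# journal published over the previous two years.
print(two_year_impact_factor(1200, 400))  # -> 3.0
```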
I have another admission to make. Until I began the process of applying for tenure a while back, I had never heard of an h-index. Suddenly I found it was as vital to my academic life as the oxygen level in my blood is to my real life. So, off I went to Google Scholar, where I found that not only was my (decent, but somewhat modest) h-index calculated for me, but so was my i10-index. I hesitate to bore you with details, but in case you don’t know what these are and really need the information, here you go…
To calculate your h-index, rank your papers from most cited to least cited. Then count down the list until you reach the last paper whose citation count is at least as large as its rank; that rank is your h-index.
An i10-index is simpler: it’s the number of your papers with at least 10 citations.
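If that sounds fiddly, both calculations fit in a few lines of code. A minimal sketch, with made-up citation counts for one hypothetical author:

```python
def h_index(citations):
    """Largest h such that h of the papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)  # most cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for cites in citations if cites >= 10)

# Made-up citation counts, purely for illustration:
papers = [42, 18, 12, 11, 9, 4, 3, 0]
print(h_index(papers))    # -> 5 (the top 5 papers each have at least 5 citations)
print(i10_index(papers))  # -> 4 (four papers have 10 or more citations)
```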
Both of these are influenced by age or, more precisely, academic age (how long you’ve been in the game) and by how much other people make use of your findings in their own work.
To a science outsider these measures might sound a bit odd. But despite their issues, they are now the standard by which university administrators, granting agencies, and others judge academic work. For better or for worse, scientists and their publications are now part of a Google-sized numbers game.
Is it in the best interests of science, and society, that measures like this are the yardsticks used to judge scientific worth? Joern Fischer, Euan Ritchie, and Jan Hanspach argue a persuasive “no” to that question in a short opinion piece in TREE (27:473-474) entitled “Academia’s obsession with quantity.” They explain that, among other things, the quantity obsession is concentrating huge amounts of resources among a small cadre of large research groups. And the push for speedy publication in high-impact journals is forcing a focus on fast, shallow work rather than reflective thought, deep experimentation, and patient observation. Careful lab research and long-term field studies are taking a back seat to expedient and efficient, but ultimately less satisfying, answers. Beyond that, and arguably more importantly, the love of indices is hurting the families and other relationships of academics.
To quote Fischer et al.: “(the) modern mantra of quantity is taking a heavy toll on two prerequisites for generating wisdom: creativity and reflection.”
Charles Darwin’s voyage on the Beagle lasted from 1831 to 1836. “On the Origin of Species” was published in 1859, more than twenty years after the boat had docked, and then only under duress as Alfred Russel Wallace was hot on the same trail.
Gregor Mendel published his important work on the transmission of traits in a little-known journal. His work only saw the light of day years later, when the rest of the world had basically caught up with his ideas.
Both of these individuals, and many others of their day, were methodical, thoughtful, and not in a rush to publish. If Darwin had been working today, he would have been under pressure to put out several papers before he even got off the ship. His granting agency would have expected him to “meet milestones,” “accomplish outcomes,” and fill out innumerable Gantt charts on a semi-quarterly basis. He would have spent most of his days responding to emails rather than collecting specimens.
Mendel’s supervisor would have been asking him “why on earth would you want to publish in that journal?” And the editor of the high-impact journal that received his work probably would have written back “Peas? Are you serious?”
But without the methodical research of the past (and by “past” we need go back barely a decade or so to find slower science), where would we be today? Does our newly hyper-caffeinated research world really work better than the more contemplative system of Mendel, Wallace, and Darwin? Is there some happy medium that we can all agree on?
I would argue that things are starting to change. Just as the music industry was finally forced to change in recent years, technology is going to force academia to change as well. In great part this is due to the rise of open access journals. These journals, such as offerings from PLoS, eLife, PeerJ, F1000 Research, and Ecosphere, are changing the publishing landscape. And the academic world will have little choice but to move along in step. Thankfully, much of the academic rank and file is quite happy to jump on board this particular train. Besides offering research results, which were likely paid for with public money, to the public for free, these journals also offer article-level metrics. That means that instead of a journal-wide impact factor, each article can be assessed by the number of downloads and/or citations. Many of these journals also promise to publish all rigorous research that passes peer review, regardless of how “sexy” it seems at the moment. So, if someone takes the time and effort for careful research on pea genetics, they can get it published even if much of the world currently couldn’t care less about peas. The crowd gets to decide, either immediately or over time, whether the findings are worth the electrons that light up the pixels on their devices.
It is starting to look like this is another case of “the more things change, the more they (return to) the same.” Just as it seemed that letter writing was dying, in came email. And now, just as it seems that contemplative science and judgement of the merit of single works were going out the window, along comes the open access publishing paradigm.
These open access endeavors deserve our support. And I am looking forward to seeing where this takes us in the coming years.
Nice post, and I agree with what you’re saying.
My frustration with impact factors is that they’re not really measuring impact; at best, they capture an intermediate step along the way. As an applied researcher (and admittedly one without a particularly high ‘h-index’), if my research has a direct effect on policy but isn’t published, then by academic standards it hasn’t had an “impact”. However, if I publish in a journal that no policy maker knows exists, then apparently my research has had an impact. And if I manage to publish in a highly technical theoretical journal, then apparently my research has had a really big impact.
It seems that we, as academics, are becoming very good at doing the things that can be measured, but not necessarily the things that actually matter.