“I am sick of impact factors and so is science.”
Stephen Curry said it best back in 2012. The impact factor is just one of the many banes of academia, plagued by everything from outright misuse to being artificially inflated by publishers.
I want to draw attention to a new article that addresses the causes behind this ‘impact factor mania’ that academia has.
The article is quite right to place the blame squarely on academics. It’s our fault that the impact factor is still misused. No-one else’s. Almost every academic knows why the impact factor is flawed, but still we use it over and over to assess the quality of a person or an article. It’s irrationality in its most blatant form, and you’d think academics would be smart enough to stop using it. But for some reason, we, as a collective, aren’t.
This article addresses many of the causes behind this persistent misuse, abuse, whatever you want to call it. It’s open access, so you can read it for free. Most importantly, though, share it with your colleagues. Academics who are against the impact factor – you do academia a disservice by letting this irrational use persist without trying to combat it. A great starting point for ‘conversion’ would be to convey the importance of signing things like the San Francisco Declaration on Research Assessment (DORA).
My main feeling is that the impact factor still carries so much weight partly because of a combination of fear and respect. Many of our senior colleagues and friends in academia will have ascended to their current positions based largely on assessments in which the impact factors of the journals they have published in will have, er, factored. No-one, I hope, wants to explicitly say that these senior faculty members have reached their positions based on something that’s effectively meaningless in terms of how ‘good’ at science-ing they actually are. But this is one of the implicit statements made when saying that the impact factor is a false method of assessment.
A personal statement: I will never publish in a journal because of its impact factor. I don’t care, not one little bit. If someone wants to judge my work based on that, they can, but I’ll explain to them why they’re wrong. I’m wary of suggesting, however, that other PhD students adopt this stance, as many people will still judge your post-doc worthiness based on the IF of the journals you’ve published in, and I don’t want you to compromise your future careers. If you do choose to follow this stance anyway, then kudos.
Getting rid of the IF is a cultural issue, and requires change from all facets of academia. It starts with engaging your colleagues. So engage your colleagues.
Further reading: Brembs, B., Button, K. and Munafò, M. (2013) Deep impact: unintended consequences of journal rank. Frontiers in Human Neuroscience (link)
Casey Ydenberg (@CAYdenberg)
I’d like to start a trend in which, whenever someone refers to a journal impact factor, everyone throws up their hands and yells “DORA!”
Jon Tennant
Make it so!!
Ross Mounce
The sheer stupidity can’t continue, can it? It seems like everyone in the room knows it’s a stupid game, but they *still* play it. You gotta laugh… The crazy world of academia!
I’d like to see someone (e.g. a biologist) publish *all* their papers from now on with PeerJ. Can you imagine the response of the evaluation committee (stunned perplexity)?
Massive props to the first author to publish 4 manuscripts in a row with PeerJ – that’d make people think about things a bit, I think. Do we *really* need 50,000+ ‘journals’ in a largely digital-only world? Is that helpful? Let alone the silliness of the ‘impact factor’…
Daniela Franco
I find the quoted article interesting, but there are a few points that I would debate.
The text lists some of the problems with the IF, including the following one.
“Delays in the communication of scientific findings: […] investigators often respond to reviewers by performing additional experiments in an effort to convince journals to accept their papers. The multiple submissions also consume reviewers’ and editors’ time and delay the public disclosure of scientific knowledge. Publication delay slows the pace of science and can directly affect society when the manuscript contains information important for drug and vaccine development, public health, or medical care.”
I strongly disagree with this statement. First of all, I think it is very important that reviewers challenge the content of a paper, and in return that the authors do any additional experiments necessary to defend their thesis, especially when the manuscript contains important information about a drug, vaccine, etc. It is essential for science that scientists get this kind of feedback. I understand that reviewers may be biased or have conflicts of interest, but that is a different issue.
Secondly, I fully disagree with the assumption that the faster the news is delivered, the better for the pace of science. The “publish or perish” mentality is the real problem behind “IF mania”; and publishing a hundred papers as fast as possible is by no means best for anyone. If there is something “urgent” to be communicated, it should be a short communication. Otherwise, I think scientists should focus on producing a high-quality research paper, instead of publishing it in little pieces to get as many papers as possible. (As a PhD student, would you rather publish a paper per chapter or a single strong paper, given that your subject allows it?)
On a different note, I suggest a different reading of the article. Swap the words IF with school marks/grades, and the rewards and academic notoriety with… well, rewards and academic notoriety. It may sound silly, but to me it seems that the core problem is the same. We need to evaluate science (or scientists?), and we turn to numbers to measure research value, just as we do to evaluate school students.
Ideally, we want to use grades/marks to assess a student’s understanding and knowledge of a given subject, but they are also used (abused?) to compare and discriminate between students. If the academic world is presumably made up of all those students who passed the standardized exams… are we really that surprised that we go back to numbers to perpetuate the only validation system that we know?
In this extreme analogy, advising a scientist not to publish in a highly regarded journal is like asking a student to intentionally not try for the highest possible mark in the exam.
Don’t get me wrong, I fully agree with the sentiment of changing the current system of assessing the quality of research and ditching the IF, but blaming the “luxury” journals is misleading. Luxury journals are a consequence, not a cause, of this insanely competitive system. I also fully agree with the “WHAT SCIENTISTS CAN DO” section.
Let’s engage!
Jon Tennant
Hey Daniela, thanks for the insightful comment. Also, hi from ICL! 🙂
I guess the additional experiments part is a very subject-specific thing – in palaeontology, for example, there wouldn’t be much point in asking for additional analyses, as that could constitute a whole new paper.
And yeah, I totally agree that the publish or perish mentality is responsible for a lot of the issues within academia. But I don’t think publishing fast is the same as publishing more, especially if the delays are down to the publishers. With journals like PeerJ having an average turnaround from submission to publication of three weeks, there’s really just no need for massive delays any more. And personally, I’d rather have many strong papers (approx. 1 per chapter) 😉
I don’t think the IF–school grades analogy works too well. The fact is, marks are a way of assessing the quality of someone’s work – the impact factor is not. I think it’s unreasonable to assess someone based on anything but the quality of their research articles, and the best way to do that is, of course, by reading them. Anyone in a related field will most likely already know another’s research well. Not too difficult a request!
If you’re around on Thursday btw, there’s an event at ICL on publishing etc. (link) – happy to chat there 🙂
Jon