Last week, GENETICS published an editorial by Editor-in-Chief Mark Johnston about the influence of the Journal Impact Factor on science, which discussed an alternative metric that emphasizes the research experience of a journal’s editors. The following is Mark’s response to some of the feedback he’s received:

In my editorial, I proposed a new metric for comparing journals, the “Journal Authority Factor” (JAF), in an attempt to highlight the flaws of the Journal Impact Factor (JIF) and the tendency of hiring, promotion, and funding committees to rely on it as a proxy for candidate quality. The JAF would use the average h-index (a personal citation index) of a journal’s editors as a rough indicator of their scientific experience and expertise.
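For readers who want the arithmetic spelled out, here is a minimal sketch of how a JAF might be computed, assuming one has the h-index of each member of a journal’s editorial board; the function name and sample values are hypothetical illustrations, not part of the editorial.

```python
def journal_authority_factor(editor_h_indices):
    """Return the average h-index of a journal's editorial board.

    A sketch of the JAF as described above: the mean of the editors'
    h-indices, taken as a rough indicator of their collective
    research experience.
    """
    if not editor_h_indices:
        raise ValueError("need at least one editor h-index")
    return sum(editor_h_indices) / len(editor_h_indices)

# Hypothetical board of five editors (values are made up for illustration):
print(journal_authority_factor([62, 48, 71, 55, 66]))  # -> 60.4
```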

I’ve received a great deal of feedback from readers thanking me for addressing the state of scientific publishing, but many remarked that the JAF is not a solution to the problem of reliance on impact factors.

Of course I agree.

I don’t think we should replace one flawed metric with another (and, for the record, neither does the Genetics Society of America). It is impossible to judge a journal and, by extension, the hundreds or thousands of articles it publishes every year, using a single metric.

I used the JAF as a device to illustrate the difference in research experience between the editors of the top-tier (high impact factor) journals and the editors of community-run journals. Do I think authors should concern themselves with the slight differences in the JAFs of peer-edited journals? No. But I do think the large differences between the JAFs of peer-edited and professionally-edited journals illustrate a significant problem with how the standards of our field are set. The point of the JAF is to underscore that publication decisions at many high impact factor journals are made by professional editors, and when we defer career-changing decisions to these journals, we are in effect giving these editors significant control not only of scientific publishing, but of the entire scientific enterprise.

Importantly, I didn’t say, and didn’t mean to imply, that professional editors have somehow “failed” as scientists, or that they are not important contributors to our community. I believe they have an important role to play in science and scientific publishing. I just don’t think they should have such a disproportionately large influence on our fields.

It’s not only science but also individual authors who benefit from having their peers handle the review of their manuscripts. We regularly receive feedback from GENETICS and G3 authors who tell us they benefit from the careful decisions our academic editors provide. In particular, they value the editors’ guidance: decision letters that adjudicate and synthesize the reviews, explain the extent of the changes or experiments that are (or are not) required to make the story compelling, outline how to respond to the reviews, and specify which comments are most important to address.

There’s no metric that will solve the problems we’re discussing. We should make hiring, promotion, and funding decisions based on candidates’ merits and promise as scientists. As I argued in a previous editorial (“We have met the enemy, and it is us”), the ultimate solution is for scientists to change our culture and stop allowing impact factors to weigh so heavily in decisions about who to hire, promote, and fund. My hope is that when we achieve this and the influence of the impact factor diminishes, the influence of journals with academic editors will increase, not least because journals like GENETICS and G3 are directly accountable to our colleagues, to the field, and in many cases to the scientific societies that represent and advocate for us.

Mark Johnston was the Editor-in-Chief of GENETICS from 2009 to 2022.
