How to evaluate our output

It is very good to see that, slowly, the OA battle is being won. There are still a few rough edges to be smoothed out, particularly in the US, but the battle is being won: everybody is aware of the problem and of the solutions and, best of all, progress is being made. Now we can perhaps turn up the heat on a situation which probably does not have an easy solution but which is, increasingly, a cause of aggravation and intellectual discrimination: the current mechanisms of peer review and the meaning and use of the Impact Factor (IF).

I have written about this before, but it is important to continue doing so to create awareness and discussion and, frankly, to build some momentum on these issues. Two publications, taken together, provide a great deal of perspective on the problem: Ron Vale’s “Evaluating how we evaluate” (Mol. Biol. of the Cell, 2012, PMID: 22936699) and Leslie Vosshall’s “The glacial pace of scientific publishing: why it hurts everyone and what we can do to fix it” (FASEB J, 2012, PMID: 22935905). I have mentioned the second one here (https://amapress.gen.cam.ac.uk/?p=1022) and both on Twitter. It is high time that we address the matters so clearly raised in these articles, accept our collective responsibility for the situation and begin to do something about it. Both articles make some suggestions, and I have added a few more. It is not going to be easy or fast, but something has to give.

One of the common discussions, in conversation and increasingly on the web, is how to evaluate a piece of work: what could substitute for the IF. We all know the impact that a publication in NCS (as they are called) can have when applying for a grant or a job. We also know that this is a con: while there is little doubt that these publications contain good science, sometimes exceptional science, they are more a measure of the marketing ability and power of the authors, or more specifically the senior author, than of the science itself. Second-tier journals carry equally good, and sometimes better, papers. It is often argued that this impact derives from the need to choose among many equally good candidates, and that in these situations the IF (that is, the journal) is a good surrogate for the quality of the science. Really? But I do not want to whinge; the important thing is not just to point out the problem, far too easy here, but to offer solutions.

If the problem is how to judge the scientific potential of people, I would argue that applicants should be asked to submit, in addition to their full publication list, their choice of the three most significant pieces of work they have to offer, with a short paragraph justifying each (which could even be word-limited). This would surely make for more interesting reading than a long (or short) list of publications, which may or may not include multi-author papers in NCS or second-tier journals. And I believe that members of panels can see through shams. After reading these three, one can look at the full list with some perspective on the applicant. Ron Vale points out that this is done in HHMI evaluations, and it seems to me that it should not be difficult to make it part of standard procedure. Ah, and people (we, they) should not be afraid of disregarding the journal label and focusing on the work; is this not what we are supposed to do? It is not very difficult… or is it?

What surprises me most about this situation is that we have relinquished our judgement to the tastes of certain journals…