What I do when I review

I currently wear a number of hats in terms of academic publishing. I’m an author (and co-author). I am often asked to be a peer reviewer. I am the editor-in-chief of a small but long-established regional journal. And I am an academic editor – some journals would call a similar position an associate or subject area editor – for PeerJ.

(As an aside, it should be noted that I do not get paid for any of these roles in any way. I, and other scientists, do this type of work because we feel that it is our responsibility to contribute to the process of scientific communication. Without proper and reputable avenues of communication, science would rapidly grind to a lurching halt as we each worked in isolation in our own little realm. The scientific endeavor has always relied on communication, and communication is only accomplished if it is also facilitated.)

Over the next while, in my sporadic blogging fashion, I plan to write down some of my thoughts about each of the roles that I mentioned above. Specifically, I would like to write briefly about the mechanics, the philosophy, and perhaps some of the side issues that arise in each role. Today I’m beginning with the role of peer reviewer.

Peer review is the cornerstone of scientific communication. Not all of our communication as scientists is peer reviewed (e.g. most conference presentations or posters are not peer reviewed). But prior to results and analyses being entered into the permanent scientific record in the form of a journal paper, the work is reviewed by two or more independent referees.

The process of peer review has been fairly standard over the years. The reviewers receive the paper (or just the title and abstract) from a journal editor or associate editor. The potential reviewer has a preliminary look at it to see if they are able to review it. If they can and are willing to do the work, they read the manuscript carefully and provide a report back to the editor. The report often comes with a recommendation: accept as is (rare), accept with minor or major revisions, reject with an invitation to resubmit, or reject outright. The journal editor weighs the various referees’ reviews and arbitrates a final decision.

So, what do I do when I receive a request to review a paper? And how do I go about reviewing? Here are some of my thoughts in the form of dos and don’ts:

  • Do consider reviewing a paper when a request is sent to you. We are all busy, and sometimes we are so busy that we need to set reviewing tasks aside for a while. But that cannot and should not be a permanent reality for a practicing scientist. As long as I am submitting papers to journals and am relying on the good graces of volunteer referees, I should be willing to do the work on the other end of the equation as well.
  • Do not review a paper if you are not qualified to do so. If the paper is out of your realm of expertise, don’t pretend that it is. Your review will not be helpful. On the other hand, there might be a reason that the editor asked you to do the review. Perhaps one portion of the paper is highly specialized and in your area. While you might not be qualified to comment on all aspects of the paper, you should be able to comment on that part. The editor should have outlined this to you in their request. But feel free to ask them if you find yourself puzzled by a request.
  • Do not review a paper if you perceive a conflict of interest. Not every editor knows every possible collaborative or collegial arrangement. If you know or suspect that there is a conflict of interest, let the editor know the details and then let them decide on your eligibility to review.
  • Do help the editor, even if you are not going to do the review. If you cannot review a paper for the reasons given in the previous two points, provide the editor with the names of some colleagues who are qualified to do the work.
  • If you accept the assignment, do be cognizant of the deadline and do your level best to meet it. Nothing is worse from an author’s point of view than waiting for months to get the reviews back on their paper. Nothing, that is, except for being an editor having to poke recalcitrant reviewers while also fending off increasingly irate emails from authors.
  • Do follow the journal’s specific reviewing guidelines. Not all journals are the same. All reputable journals expect scientific rigor and appropriate analysis. But there is variation beyond that (e.g., some journals look for “impact” while some do not).
  • Do review as you would wish to be reviewed. In other words, follow the golden rule of reviewing. No author wants to hear the bad news that there are flaws in their analysis or reasoning. But neither does any author want to publish a flawed manuscript. So tell it like it is. Use your expertise. But – and this is an important “but” – be respectful. Whether you are recommending accepting or rejecting the paper, give constructive and useful feedback. Note the positive aspects of the work. Explain what you think could be improved in clear terms. Be as extensive in your comments as you need to be; but never be blunt and brief to the point of being insulting. If you have taken the job on, then do a good job. A good job always entails more than six lines of halfhearted text. Be helpful, be kind, and be honest. In short, be professional.
  • Do be willing to re-review the paper if necessary. If you do end up rejecting the paper or recommending major revisions, let the editor know that you would be willing to have another look at the paper if or when the authors resubmit it. Since you now have some of the best knowledge of the state of the paper, you are among the best placed to assess the recommended changes.
  • Do record your work in your CV. You are not being paid for the work, but it is part of your contribution to the scientific community. Immediately after finishing a reviewing task, record the task (not revealing the authors’ names or other identifying information, of course) in your CV so that you don’t forget about it.
  • Do maintain confidentiality as expected. Most journals still use an anonymous (and sometimes double-blind) review system. That means that you are obliged not to reveal details of your review unless the authors and you both agree at some point. There are a few caveats to this. First, if you wish to sign your review, you can reveal yourself to the authors, but even then, you cannot discuss the details of the review with others. Some journals, such as PeerJ or F1000Research, encourage open peer review. If that’s the case and you choose to abide by that system, then the entire review process will be made public. But, even then, you must maintain confidentiality until the paper and the accompanying reviews are published. Again, be professional.
  • Do look forward to reading the paper in the literature. If you have done a good job of reviewing it, you should take some degree of pride in the outcome because you have had a (hopefully) positive influence on the direction of science.

Peer reviewing takes time and effort, but it is also a rewarding experience. Besides allowing scientists another avenue for participation in the scientific process, it also exposes us to new ideas and cutting-edge thinking. And, above all, it ensures rigor in the scientific record. So enjoy the work, learn from it, and take pride in doing a good job.

Happy birthday, PeerJ

A quick post to note PeerJ’s first birthday.

PeerJ is a biological open access journal – backed by an excellent publishing team, an advisory board replete with luminaries, and a diverse editorial board – that also happens to come with some interesting twists that are bound to change the scientific publishing paradigm.

First, instead of paying an open access publishing fee for each paper that is accepted, authors each pay a lifetime membership fee (paid memberships start at US$99). If you and your co-authors each have a membership, you can publish in PeerJ. In order to keep up your membership, you need to participate regularly in journal activities such as editing, reviewing, or commenting on articles. In other words, with one membership you can publish open access articles in PeerJ for life.

That, in itself, is a twist that makes PeerJ unique.

The second twist – and the one that I’d like to briefly focus on here – is PeerJ PrePrints.

A preprint is a not-yet-peer-reviewed version of a manuscript that is placed on a public server for early dissemination to the rest of the scientific community. Preprints give other researchers early access to data, results, and interpretations. They allow for pre-review discussion and criticism of the ideas that, if taken to heart by the authors, serve to strengthen the manuscript for eventual peer review and publication. And, when uploaded to a recognized preprint service, preprints set a date-stamped precedent for the ideas that they contain. To a great extent, a preprint is simply a conference presentation or poster in formal manuscript form with broader access and better DOI-based citation/recognition.

Physicists, astronomers, computer scientists, and mathematicians (to name a few) have dealt in preprints for many years now. For some reason, the biological sciences have lagged behind in this regard. But things are changing. Rapidly.

And PeerJ has played a major role in that change over the past year.

As of this post, there are 29 PeerJ PrePrints at the journal site, some of which are in their V.2 or V.3 forms (yes, you can update your preprint as you receive comments, etc.). That list is bound to grow in the coming years.

Keep an eye on PeerJ. It’s going places. I’m hoping that my lab will soon submit a few preprints and journal articles, and I hope that you are considering it as well.

NOTE #1: While the world of biological academic publishing is changing in regard to preprints, there are still some hold-out journals which either have ambiguous policies or which flat-out reject papers that have been published as preprints. You can use these tools – here and here – to make decisions regarding preprinting of your upcoming manuscript.

NOTE #2: At the membership link, you’ll have noticed that there is a free membership that allows you to submit one public PeerJ PrePrint per year. So it’s a great way to try out the system without spending a single dime.

Open data

by Dezene Huber and Paul Fields, reblogged from the ESC-SEC Blog.

Have you ever read a paper and, after digesting it for a bit, thought: “I wish I could play with the data”?

Perhaps you thought that another statistical test was more appropriate for the data and would provide a different interpretation than the one given by the authors. Maybe you had completed a similar experiment and you wanted to conduct a deeper comparison of the results than would be possible by simply assessing a set of bar graphs or a table of statistical values. Maybe you were working on a meta-analysis and the entire data set would have been extremely useful in your work. Perhaps you thought that you had detected a flaw in the study, and you would have liked to test the data to see if your hunch was correct.

Whatever your reason for wanting access to the data, and this list probably just skims the surface of the sea of possibilities, you often have only one option for getting your hands on the spreadsheets or other data outputs from the study – contacting the corresponding author.

Sometimes that works. Oftentimes it does not.

  • The corresponding author may no longer be affiliated with the listed contact information. Tracking her down might not be easy, particularly if she has moved on from academic or government research.
  • The corresponding author may no longer be alive, the fate of us all.
  • You may be able to track down the author, but the data may no longer be available. Perhaps the student or postdoc who produced it is now out of contact with the PI. But even if efforts have been made to retain lab notebooks and similar items, are the data easily sharable?
  • And, even if the data are potentially sharable (for instance, in an Excel file), are the PI’s records organized enough to find them?*
  • The author may be unwilling to share the data for one reason or another.

Molloy (2011) covers many of the above points and also goes into much greater depth on the topic of open data than we are able to do here.

In many fields of study, the issues that we mention above are the rule rather than the exception. Some readers may note that a few fields have had policies to avoid issues like this for some time. For instance, genomics researchers have long used repositories such as NCBI to deposit data at the time of a study being published. And taxonomists have deposited labeled voucher specimens in curated collections for longer than any of us have been alive. Even in those cases, however, there are usually data outputs from studies associated with the deposited material that never again see the light of day. So even the apparent exceptions end up illustrating the general rule: a lack of access to data.

But, what if things were different? What might a coherent open data policy look like? The Amsterdam Manifesto, which is still a work in progress, may be a good start. Its points are simple, but potentially paradigm-shifting. It states that:

  1. Data should be considered citable products of research.
  2. Such data should be held in persistent public repositories.
  3. If a publication is based on data not included in the text, those data should be cited in the publication.
  4. A data citation in a publication should resemble a bibliographic citation.
  5. A data citation should include a unique persistent identifier (a DataCite DOI is recommended, unless other persistent identifiers are in use within the community).
  6. The identifier should resolve to provide either direct access to the data or information on accessibility.
  7. If data citation supports versioning of the data set, it should provide a method to access all the versions.
  8. Data citation should support attribution of credit to all contributors.
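
To illustrate points 4 through 6: a data citation following these principles might look something like this (the authors, title, and identifier below are entirely hypothetical):

Doe J, Roe R (2013) Data from: An example study of overwintering survival. Dryad Digital Repository. doi:10.5061/dryad.xxxxx

A reader who resolves such a DOI should land either on the data themselves or on a page explaining how to access them, and everyone who contributed to the data set gets citable credit.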

This line of reasoning is no longer just left to back-of-napkin scrawls. Open access to long-term, citable data is slowly becoming the norm rather than the exception. Several journals have begun to require, or at least strongly suggest, deposition of all data associated with a study at the time of submission. These include PeerJ and various PLoS journals. It is more than likely that other journals will do the same, now that this ball is rolling.

The benefits of open data are numerous (Molloy, 2011). They include the fact that full disclosure of data allows for verification of your results by others. Openness also allows others to use your data in ways that you may not have anticipated. It ensures that the data reside alongside the papers that stemmed from them. It reduces the likelihood that your data may be lost due to various common circumstances. Above all, it takes the most common of scientific outputs – the peer reviewed paper – and adds lasting value for ongoing use by others. We believe that these benefits outweigh the two main costs: the time taken to organize the data and the effort involved in posting it to an online data repository.

If this interests you, and we hope that it does, the next question on your mind is probably “where can I deposit the data for my next paper?” There are a number of options available that allow citable (DOI) archiving of all sorts of data types (text, spreadsheets, photographs, videos, even that poster or presentation file from your last conference presentation). These include figshare, Dryad, various institutional repositories, and others. You can search for specific repositories at OpenDOAR using a number of criteria. When choosing a data repository, it is important that you ensure that it is backed up by a system such as CLOCKSS.

Along with the ongoing expansion of open access publishing options, open data archiving is beginning to come into its own. Perhaps you can think of novel ways to prepare and share the data from your next manuscript, talk, or poster presentation for use by a wide and diverse audience.

—–

* To illustrate this point, one of us (DH) still has access to the data for the papers that stemmed from his Ph.D. thesis research. Or at least he thinks that he does. They currently reside on the hard drive of the Bondi blue iMac that he used to write his thesis, and that is now stored in a crawlspace under the stairs at his house. Maybe it still works and maybe the data could be retrieved. But it would entail a fair bit of work to do that (not to mention trying to remember the file structure more than a decade later). And digital media have a shelf life, so data retrieval may be out of the question at this point anyhow.

Whither peer review?

If you’ve been working in science long enough to have published at least one or two papers, you are already well-acquainted with certain aspects of the process:

  • Our current system of anonymous peer review has been a resounding success in terms of furthering the scientific endeavor.
  • Anonymous peer review has been around for a long time now and has carved itself a firm niche within academic culture.
  • A good reviewer is worth their weight in gold (or ink?). Their suggestions, even when graciously rejecting your article, can be used to strengthen the work for eventual publication.
  • Thankfully, most reviewers are good reviewers. Most take the time to carefully and thoughtfully train their lens of critical expertise on the submissions that they receive. In most cases, the eventual published products benefit from the (usually mainly unrewarded) referee’s effort.
  • A poor reviewer, on the other hand, is one of the most aggravating people that you will ever encounter. Poor reviewers take many forms. There are the ones who seem not to have read your paper in the first place and who ask questions about things that are explicitly addressed in your submission. There are those who seem to have an agenda, either scientific or otherwise, and who wear that agenda on their lab coat sleeve. And there are those who obviously don’t have the time or inclination to give a proper review and so either cursorily reject (usually) or accept your paper while offering no helpful advice in their five-sentence paragraph to the editor. There is no real recourse for response; no real opportunity for dialogue. The review is the review is the review. Good, bad, ugly, or very ugly.
  • The system can be slow, not necessarily because of careful consideration by reviewers, but simply because a manuscript can sit for weeks or months on someone’s desk before they get reminded the seventeenth and final time by the journal editor to complete the review.
  • No one has ever received tenure or promotion on the basis of their careful and fair reviews of others’ articles. Conducting reviews is vital to the ongoing work of science, but is a generally thankless job.

There are any number of peer review horror stories out there. Some of them are real. Some of them stem from the fact that nobody likes to get their work rejected. So it’s tempting to ascribe villainous motives to the anonymous reviewer(s) who stopped your article in its tracks. It is often hard to differentiate a legitimate beef from sour grapes.

Sir Winston Churchill is reputed to have said, “(i)t has been said that democracy is the worst form of government except all the others that have been tried.” And the same might be said for anonymous peer review. The fact of the matter is that peer review has served science well and continues to do so to this day. But that doesn’t mean that the current system is the pinnacle accomplishment of the scientific publishing process. Life evolves. Culture evolves. Technology evolves.

To stretch the evolutionary analogy, are we witnessing something akin to directional selective pressure on the anonymous peer review process? If so, where is the process being pushed? Are there better forms of reviewing that we have not yet tried because, until recently, our technology would not permit them? As technology changes, will peer review also change and become better – both for the scientists involved and for the furthering of our scientific knowledge in general?

Along with the recent discussion about more open science and more “crowd” involvement in the process, we are also hearing some interesting ideas about changes to the review process. One such idea was recently presented by James Rosindell and Will Pearse at the PLoS Biologue blog:

Peer review is an essential part of science, but there are problems with the current system. Despite considerable effort on the part of reviewers and editors it remains difficult to obtain high quality, thoughtful and unbiased reviews, and reviewers are not sufficiently rewarded for their efforts. The process also takes a lot of time for editors, reviewers and authors.

And their solution:

We propose a new system for peer review. Submitted manuscripts are made immediately available online. Commissioned and/or voluntary reviews would appear online shortly afterwards. The agreement or disagreement of other interested scientists and reviewers are automatically tallied, so editors have a survey of general opinion, as well as full reviews, to inform their decisions.

…

In our proposed system, users would log into the system and get the opportunity to vote once for each article (or reviewers comment), thereby moving it up or down the rankings. Access could be restricted to those within the academic world or even within an appropriate discipline, so only appropriately qualified individuals could influence the rankings. The publication models of established journals would be preserved, as full publication of an article can still take place once the journal is satisfied with the scientific community’s reception of the work.
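
For concreteness, here is a toy sketch of the kind of one-vote-per-user tally that the proposal describes (all identifiers are invented, and a real system would also need authentication and discipline-based access control):

```python
from collections import defaultdict

# One up-or-down vote per user per article; editors see the net score.
votes = defaultdict(dict)  # article id -> {user id: +1 or -1}

def vote(article, user, value):
    assert value in (+1, -1)
    votes[article][user] = value  # voting again simply replaces the old vote

def score(article):
    return sum(votes[article].values())

vote("ms-042", "reviewer_a", +1)
vote("ms-042", "reviewer_b", +1)
vote("ms-042", "reviewer_c", -1)
print(score("ms-042"))  # 1
```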

There are certainly attractive elements to this idea. First, of course, is the idea of online publication of what amounts to being a preprint. This gives the authors official priority and it gets the results out to the community as soon as possible. It also allows some semblance of “democratization” as the review process would no longer be a one-way street. And, of course, it forces reviewers to be responsible for their comments and decisions; the lack of such accountability being one of the biggest issues with the system of anonymous peer review. Referees would also receive explicit credit for their good, and not-so-good, reviews. A great reviewing track record may be the sort of thing that could actually be rewarded within the academy. There would be a real incentive to conduct good reviews.

However, I have concerns as well. Just as with “liking” on Facebook, this has the potential to become a popularity contest. And science is not about popularity. It is about truth. And truth can come from unpopular sources. There is also the likelihood that some researchers in highly competitive fields will only sign on to such a system with extreme reluctance due to the fear of being scooped.

Beyond that, would already overworked researchers really take quality time to thoughtfully comment on preprints? And, would there be ways to game the system, analogous to people trying to increase their search engine rankings? Finally, what about small and boutique journals? The authors of the new peer review proposal envision a marketplace where editors bid for articles within the ranking system. As the editor of a small, regional journal, I am worried about what would happen to journals like the one that I oversee. Would we be able to win bids for quality papers? Or would we get lost in the shuffle after over 100 years of service to the scientific community?

As with the shifts that are occurring with the move toward open access and away from impact factors, I am positive that peer review will also have to change. And it’s good to see that people are thinking about how those changes will come about. Hopefully some of the various concerns with the intended and unintended consequences of changing the system will also be thoughtfully considered. There’s nothing wrong with moving quickly as long as you apply the brakes appropriately around the corners.

A quick post script: It should be noted that the peer review process is not a monolithic edifice of utter similarity across the board. Some journals (e.g., BMJ) have been practicing open peer review for quite some time now. And some new journals (e.g. PeerJ) are also pushing into new territory on this front.

Slow science

I have an admission to make. All the way through my Ph.D. studies and on into my first postdoctoral stint, I had no idea what an impact factor was. I still remember my first encounter with the concept. A number of fellow postdocs and students were discussing which journal a particular paper that one of them was working on should be sent to. After a bit of listening (and probably nodding along cluelessly with the discussion) I found a computer and looked it up. Most of you reading this probably know what it is. But, for the record, it is a measure of how often a journal’s recent articles are cited: roughly, the average number of citations received in a given year by the articles that the journal published in the previous two years. And this is supposed to allow ranking of a specific journal’s importance compared to others. Of course, this whole endeavor is fraught with problems. But even so, it’s become well nigh impossible to hold an extended conversation about academic publishing with a group of scientists without impact factor considerations coming up.
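
For the curious, here is a minimal sketch of the standard two-year calculation; the journal numbers below are made up for illustration:

```python
# Two-year impact factor for year Y: citations received in Y by the
# articles a journal published in Y-1 and Y-2, divided by the number
# of citable items the journal published in Y-1 and Y-2.
def impact_factor(citations_this_year, citable_items):
    return citations_this_year / citable_items

# A journal whose 2011 and 2012 articles (500 citable items in total)
# drew 1500 citations during 2013 would have a 2013 impact factor of 3.0.
print(impact_factor(1500, 500))  # 3.0
```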

I have another admission to make. Until I began the process of applying for tenure a while back, I had never heard of an h-index. Suddenly I found it was as vital to my academic life as is the oxygen level in my blood to my real life. So, off I went to Google Scholar where I found that not only was my (decent, but somewhat modest) h-index calculated for me, but so was my i10-index. I hesitate to bore you with details, but in case you don’t know what these are and really need the information, here you go…

To calculate your h-index, put your papers in order from most cited to least cited. Then count down the ranked papers from top to bottom until you get to the last point where a paper has at least as many citations as its rank on the list. That is your h-index.

An i10-index is simpler – it’s the number of your papers with at least 10 citations.
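
Since both definitions are short enough to compute by hand, here is a minimal Python sketch of the two indices (the citation record at the bottom is invented for illustration):

```python
def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still has at least as many citations as its rank
        else:
            break
    return h

def i10_index(citations):
    """i10-index: the number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

# An invented citation record:
record = [48, 33, 17, 12, 10, 9, 4, 1, 0]
print(h_index(record))    # 6 -- six papers each have at least 6 citations
print(i10_index(record))  # 5
```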

Both of these are influenced by age or, more precisely, academic age (how long you’ve been in the game) and by how much other people make use of your findings in their own work.

To a science outsider these measures might sound a bit odd. But despite their issues they are now the standard by which university administrators, granting agencies, and others judge academic work. For better or for worse, scientists and their publications are now part of a Google-sized numbers game.

Is it in the best interests of science, and society, that measures like this are the yardsticks used to judge scientific worth? Joern Fischer, Euan Ritchie, and Jan Hanspach argue a persuasive “no” to that question in a short opinion piece in TREE (27:473-474) entitled “Academia’s obsession with quantity.” They explain that, among other things, the quantity obsession is concentrating huge amounts of resources among a small cadre of large research groups. And the push for speedy publication in high-impact journals is forcing a focus on fast and shallow rather than reflective thought, deep experimentation, and patient observation. Careful lab research and long-term field studies are taking a back seat to expedient and efficient – but ultimately less satisfying – answers. Beyond that, and arguably more importantly, the love of indices is hurting the families and other relationships of academics.

To quote Fischer et al.: “(the) modern mantra of quantity is taking a heavy toll on two prerequisites for generating wisdom: creativity and reflection.”

Charles Darwin’s voyage on the Beagle lasted from 1831 to 1836. “On the Origin of Species” was published in 1859, more than two decades after the ship had docked, and then only under duress as Alfred Russel Wallace was hot on the same trail.

Gregor Mendel published his important work on the transmission of traits in a little-known journal. His work only saw the light of day years later, when the rest of the world had basically caught up with his ideas.

Both of these individuals, and many others of their day, were methodical, thoughtful, and not in a rush to publish. If Darwin had been alive today, he would have been under pressure to put out several papers before he even got off the ship. His granting agency would have expected him to “meet milestones,” “accomplish outcomes,” and fill out innumerable Gantt charts on a semi-quarterly basis. He would have spent most of his days responding to emails rather than collecting specimens.

Mendel’s supervisor would have been asking him “why on earth would you want to publish in that journal?” And the editor of the high-impact journal that received his work probably would have written back “Peas? Are you serious?”

But without the methodical research of the past – and by “past” we barely have to go back much more than a decade or so to see slower science – where would we be today? Does our newly hyper-caffeinated research world really work better than the more contemplative system of Mendel, Wallace, and Darwin? Is there some happy medium that we can all agree on?

I would argue that things are starting to change. Just as the music industry was finally forced to change in recent years, technology is going to force academia to change as well. In great part this is due to the rise of open access journals. These journals – such as offerings from PLoS, eLife, PeerJ, F1000 Research, and Ecosphere – are changing the publishing landscape. And the academic world will have little choice but to move along in step. Thankfully, much of the academic rank and file is quite happy to jump on board this particular train. Besides offering research results – which were likely paid for with public money – to the public for free, these journals also offer article-level metrics. That means that instead of a journal-wide impact factor, each article can be assessed by the number of downloads and/or citations. Many of these journals also promise to publish all rigorous research that passes peer review, no matter how “sexy” it seems to be at the moment. So, if someone takes the time and effort for careful research on pea genetics, they can get it published even if much of the world currently couldn’t care less about peas. The crowd gets to decide – either immediately or over time – if the findings are worth the electrons that light up the pixels on their devices.

It is starting to look like this is another case of “the more things change, the more they (return to) the same.” Just as it seemed that letter writing was dying, in came email. And now, just as it seems that contemplative science and judgement of the merit of single works were going out the window, along comes the open access publishing paradigm.

These open access endeavors deserve our support. And I am looking forward to seeing where this takes us in the coming years.