Twitter JAM

I just returned home last night after spending a few days in Edmonton at the Joint Annual Meeting of the Entomological Society of Canada and the Entomological Society of Alberta. It was a well-organized meeting with lots of great talks and posters. And, of course, lots of time to reconnect with colleagues from other universities.

A number of entomologists at the meeting, including myself, have Twitter accounts, so we “live tweeted” some of the sessions that we attended. The conference hashtag was #ESCJAM2012, in case you want to take a look at the Twitter record of the event.

From my perspective, live conference tweeting was generally a positive experience, although I say that with a few caveats. Here are my brief thoughts on the Twitter JAM:

1. I enjoyed being able to read about what was going on in other concurrent sessions. My fairly packed schedule this year did not give me much leeway to move from session to session. With so many concurrent sessions, I would have ended up missing interesting talks regardless. So it was good to have at least a taste of what was going on elsewhere. Some of the conference tweets encouraged me to talk to others about research presentations that I didn’t get to attend.

2. I can imagine how this practice is useful for professional and citizen scientists who are not able to attend a meeting. I know that if I were not at the #ESCJAM2012, I would have been following along from my office desk. I plan to virtually attend conferences like this in the future.

3. I noticed that live tweeting can be distracting in a number of ways. First, I often worried that I was causing distraction to neighbors when I would pull out my iPad to compose and send a tweet during a talk. Although I tried to sequester myself near the back edges of rooms (not great for face-to-face networking), I would sometimes get glances when my iPad lit up. Second, the act of composing and sending a tweet distracted me for a few moments from what was going on up front. There were a few times that I knew that I had missed an important point. And third, I know that a few of my followers found the stream of insect tweets to be a bit of a hassle. None of these are insurmountable, but all are issues that we need to be aware of.

4. Some tweets are better than others, including tweets at a scientific conference. Was every one of my tweets useful? I doubt it. Did every one of my tweets fairly represent the talk that I was listening to? Is that even possible in 140 characters? Obviously not. As Marshall McLuhan famously intoned, the medium is the message. Ultimately, is Twitter the best medium for science?

5. To expand on point #4, the best tweets were the ones that contained added value. A great example of this was a “toy” built by David Shorthouse that “caught” tweets with the #ESCJAM2012 hashtag and a species name and then pulled up a bunch of related references.

This is but one example of how Twitter can, in fact, punch above its 140 character weight.
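I don't know how David's actual tool was built, but the matching step it describes can be sketched in a few lines of Python. Everything here — the function name, the sample tweets, and the crude binomial pattern — is my own invention for illustration; real species-name recognition is far more sophisticated than a regex:

```python
import re

# Crude pattern for a Latin binomial: a capitalized genus followed by a
# lowercase epithet. In real running text this over-matches ordinary
# capitalized phrases, so a production tool would need a proper name index.
BINOMIAL = re.compile(r"\b([A-Z][a-z]+ [a-z]{3,})\b")

def catch_species_tweets(tweets, hashtag="#ESCJAM2012"):
    """Yield (tweet, candidate_species) pairs for tweets that carry the
    conference hashtag and appear to mention a species name."""
    for tweet in tweets:
        if hashtag.lower() not in tweet.lower():
            continue
        for name in BINOMIAL.findall(tweet):
            yield tweet, name

tweets = [
    "#ESCJAM2012 Dendroctonus ponderosae is expanding its range",
    "#ESCJAM2012 great coffee break",
]
for tweet, species in catch_species_tweets(tweets):
    print(species)  # a candidate name to hand off to a reference lookup
```

The interesting part of the real toy — pulling up related references for each matched name — would hang off that final loop.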

In a much less technical fashion, in one or two instances I dug up new or classic papers related to a presentation and provided the URL(s) in a tweet.

Of course, that whole process took even longer than a regular tweet because I had my nose buried in Google Scholar; so we’re back at point #3. Some form of automation, perhaps similar to that also envisioned by David, could do what I did more effectively without me actually having to poke away at my iPad while only partly paying attention to someone who spent a lot of time putting together a good presentation.

6. Science is becoming more and more open, and that is a good thing. Journal articles and conferences were originally intended to increase the flow of information, ideas, and data. For many, many years both have done just that. But the web-connected world means that those vehicles don’t always do that as well as they used to in their fully traditional form; nor do they do it as well as they could considering the available technology. Just as paywalls at journal sites act to slow the flow of information compared to innovative open access options, conference travel and fees represent a paywall as well. We now have the technology to tear down those conference walls so that all of our colleagues and the general public can benefit and build on our ideas. Twitter might be part of the paywall wrecking crew, at least in the near term.

7. What if each session at a conference had a designated tweeter (DT)? Sessions already have a moderator and a projectionist, and I can imagine adding a DT to that mix. The DT in each concurrent session would tweet into one unified conference account (e.g. @ESCJAM2012). Each session would have its own separate hashtag (e.g. #ESCforestry, #ESCbiodiversity, #ESCevolution, #ESCecology). The choice of DT for a session would be based on their interest and expertise in order to make the tweets as relevant as possible. In other words, thought would go into the choice of a session DT; the DT wouldn’t necessarily be the first available volunteer. Others in the sessions would be encouraged to participate as well, but general participants would not feel like the tweeting burden was on them. General participants could maintain good focus – why even meet in person if your nose is in your device half of the time? – and could tweet from time to time if they felt a reason or had the expertise to add value to the online conversation. But whatever the general participants decided to do, the session would be broadcast in an effective manner by an engaged and expert DT.

Do you have other thoughts on this practice? Where do you see this going in the future? Is live tweeting simply a road stop on the way to standardized full broadcasts of conferences? What, if anything, does tweeting bring to the table that is missing from face-to-face interaction or that couldn’t be realized through other non-electronic means? What hesitations do you have about this practice? How has live tweeting been a benefit to you or to others who you know?

Live tweeting, or something like it, seems to be the direction that we’re heading. It’s time for some frank discussion about the best ways to make scientific conferences more open to all. So tweet away!

The rise of biological preprints

Although I’m not particularly long-in-the-tooth, for my entire scientific life I have known that publishers (at least in my field) do not accept papers that have been published elsewhere. And while workers in fields like mathematics and physics have long been able to post preprints of their work prior to peer review and subsequent publication in a journal, researchers in the biological sciences have generally not been allowed to do that. This is because most, if not all, journals that accept biological research manuscripts have historically considered posting a preprint as prior publication. And papers that have been previously published are, rightly, persona non grata in reputable journals.

This “prior publication” attitude toward preprints is a pity because such posting has many upsides (outlined in detail here and here and here) and very few downsides. As an editor of a small journal, and a regular reviewer for a large number of other journals in my field, I can attest to the fact that posting to such a service, in which members of the community can comment and critique an article prior to review, would have helped to strengthen just about every manuscript that has ever come across my desktop.

Some of the biggest advantages of preprint posting that I can see are:

Increased community involvement in the scientific process: Scientists at all levels would be able to take part in reading, processing, and commenting on others’ work. Amateurs would also have access to the process and could provide their often-valuable input as well. That would build community, connections, and collaborations. And that would, in turn, help to strengthen and improve the scientific endeavor in general.

Providing authors with valuable feedback and allowing them to improve their work prior to a formal review: As an editor and reviewer I understand quite intimately the (generally thankless) time and effort that it takes to process an article from first submission to final publication. As an author, I know what it feels like to have the “reject” button pressed on a study that I have invested blood and sweat into. In both cases, prior thoughtful advice and critique from the larger community would help to make the formal process smoother.

Results become visible and public more rapidly: Again, as an editor, I know how long it can take for a paper to move from submission to publication. While some traditional journals have done their best to speed things along in recent years, we all have stories of papers that have languished for eons on some editor’s or reviewer’s desk, holding up publication, sometimes for years. Preprint posting does an end-around, allowing the work to be seen immediately and reducing the irritation that slow processing by a journal might cause. The rest of the scientific community would have access to results that may improve research in other labs or even other fields prior to official acceptance and formal publication.

Less fear about being scooped: I’m thankful that my area of biology generally moves at merely a moderate clip. I’m also thankful that, in general, colleagues in my field are much more willing and eager to collaborate than to compete. However, I’m fully aware that not all fields are like this. In those fields, researchers rightly worry about another lab beating them to the punch. Preprint posting, as it is fully public, would give a researcher a claim to precedence that could be fully validated as necessary. Personally, I see this as the least important of the reasons for posting to a preprint server. But I understand that it is a consideration for many.

In the last little while many major publishers have changed their tune on this. Most recently that included the stable of journals held by the Ecological Society of America. In addition, a new kid on the block, PeerJ, is going to run a preprint service as a part of its overall open access journal offering. This is a trend that is being welcomed by many in the field. And it’s one more example of how scientific publishing is necessarily changing – I think for the better – as it is stretched by new technologies and concomitant new ways of doing things.

Whither peer review?

If you’ve been working in science long enough to have published at least one or two papers, you are already well-acquainted with certain aspects of the process:

  • Our current system of anonymous peer review has been a resounding success in terms of furthering the scientific endeavor.
  • Anonymous peer review has been around for a long time now and has carved itself a firm niche within academic culture.
  • A good reviewer is worth their weight in gold (or ink?). Their suggestions, even when graciously rejecting your article, can be used to strengthen the work for eventual publication.
  • Thankfully, most reviewers are good reviewers. Most take the time to carefully and thoughtfully train their lens of critical expertise on the submissions that they receive. In most cases, the eventual published products benefit from the (usually mainly unrewarded) referee’s effort.
  • A poor reviewer, on the other hand, is one of the most aggravating people that you will ever encounter. Poor reviewers take many forms. There are the ones that seem to have not read your paper in the first place and ask questions about things that are explicitly mentioned in your submission. There are those who seem to have an agenda, either scientific or otherwise, and who wear that agenda on their lab coat sleeve. And there are those who obviously don’t have the time or inclination to give a proper review and so either cursorily reject (usually) or accept your paper but who offer no helpful advice in their five-sentence paragraph to the editor. There is no real recourse for response; no real opportunity for dialogue. The review is the review is the review. Good, bad, ugly, or very ugly.
  • The system can be slow, not necessarily because of careful consideration by reviewers, but simply because a manuscript can sit for weeks or months on someone’s desk before they get reminded the seventeenth and final time by the journal editor to complete the review.
  • No one has ever received tenure or promotion on the basis of their careful and fair reviews of others’ articles. Conducting reviews is vital to the ongoing work of science, but is a generally thankless job.

There are any number of peer review horror stories out there. Some of them are real. Some of them stem from the fact that nobody likes to get their work rejected. So it’s tempting to ascribe villainous motives to the anonymous reviewer(s) who stopped your article in its tracks. It is often hard to differentiate a legitimate beef from sour grapes.

Sir Winston Churchill is reputed to have said, “(i)t has been said that democracy is the worst form of government except all the others that have been tried.” And the same might be said for anonymous peer review. The fact of the matter is that peer review has served science well and continues to do so to this day. But that doesn’t mean that the current system is the pinnacle accomplishment of the scientific publishing process. Life evolves. Culture evolves. Technology evolves.

To stretch the evolutionary analogy, are we witnessing something akin to directional selective pressure on the anonymous peer review process? If so, where is the process being pushed? Are there better forms of reviewing that we have not yet tried because, until recently, our technology would not permit them? As technology changes, will peer review also change and become better – both for the scientists involved and for the furthering of our scientific knowledge in general?

Along with the recent discussion about more open science and more “crowd” involvement in the process, we are also hearing some interesting ideas about changes to the review process. One such idea was recently presented by James Rosindell and Will Pearse at the PLoS Biologue blog:

Peer review is an essential part of science, but there are problems with the current system. Despite considerable effort on the part of reviewers and editors it remains difficult to obtain high quality, thoughtful and unbiased reviews, and reviewers are not sufficiently rewarded for their efforts. The process also takes a lot of time for editors, reviewers and authors.

And their solution:

We propose a new system for peer review. Submitted manuscripts are made immediately available online. Commissioned and/or voluntary reviews would appear online shortly afterwards. The agreement or disagreement of other interested scientists and reviewers are automatically tallied, so editors have a survey of general opinion, as well as full reviews, to inform their decisions.

In our proposed system, users would log into the system and get the opportunity to vote once for each article (or reviewer’s comment), thereby moving it up or down the rankings. Access could be restricted to those within the academic world or even within an appropriate discipline, so only appropriately qualified individuals could influence the rankings. The publication models of established journals would be preserved, as full publication of an article can still take place once the journal is satisfied with the scientific community’s reception of the work.
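The tallying step that the proposal describes is straightforward to sketch. This is my own toy rendering of the idea – the class name, methods, and restriction to a set of qualified voters are assumptions, not anything from Rosindell and Pearse’s actual design:

```python
from collections import defaultdict

class ReviewBoard:
    """Toy tally of up/down votes on posted manuscripts and reviews.
    Enforces one vote per qualified user per item, as the proposal requires."""

    def __init__(self, qualified_users):
        self.qualified = set(qualified_users)   # e.g. verified academics
        self.votes = defaultdict(dict)          # item -> {user: +1 or -1}

    def vote(self, user, item, up=True):
        if user not in self.qualified:
            raise PermissionError(f"{user} is not a qualified voter")
        self.votes[item][user] = 1 if up else -1  # re-voting just overwrites

    def score(self, item):
        """Net agreement for one manuscript or review."""
        return sum(self.votes[item].values())

    def ranking(self):
        """Items ordered from highest to lowest net score,
        giving an editor a survey of general opinion at a glance."""
        return sorted(self.votes, key=self.score, reverse=True)

board = ReviewBoard({"alice", "bob", "carol"})
board.vote("alice", "ms-001")
board.vote("bob", "ms-001")
board.vote("carol", "ms-002", up=False)
print(board.ranking())  # ['ms-001', 'ms-002']
```

Storing one vote per (item, user) pair is what makes the “vote once” rule trivial to enforce; everything contentious about the proposal lies in who gets into the qualified set.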

There are certainly attractive elements to this idea. First, of course, is the idea of online publication of what amounts to being a preprint. This gives the authors official priority and it gets the results out to the community as soon as possible. It also allows some semblance of “democratization” as the review process would no longer be a one-way street. And, of course, it forces reviewers to be responsible for their comments and decisions; the lack of such accountability being one of the biggest issues with the system of anonymous peer review. Referees would also receive explicit credit for their good, and not-so-good, reviews. A great reviewing track record may be the sort of thing that could actually be rewarded within the academy. There would be a real incentive to conduct good reviews.

However, I have concerns as well. Just as with “liking” on Facebook, this has the potential to become a popularity contest. And science is not about popularity. It is about truth. And truth can come from unpopular sources. There is also the likelihood that some researchers in highly competitive fields will only sign on to such a system with extreme reluctance due to the fear of being scooped.

Beyond that, would already overworked researchers really take quality time to thoughtfully comment on preprints? And, would there be ways to game the system, analogous to people trying to increase their search engine rankings? Finally, what about small and boutique journals? The authors of the new peer review proposal envision a marketplace where editors bid for articles within the ranking system. As the editor of a small, regional journal, I am worried about what would happen to journals like the one that I oversee. Would we be able to win bids for quality papers? Or would we get lost in the shuffle after over 100 years of service to the scientific community?

As with the shifts that are occurring with the move toward open access and away from impact factors, I am positive that peer review will also have to change. And it’s good to see that people are thinking about how those changes will come about. Hopefully some of the various concerns with the intended and unintended consequences of changing the system will also be thoughtfully considered. There’s nothing wrong with moving quickly as long as you apply the brakes appropriately around the corners.

A quick post script: It should be noted that the peer review process is not a monolithic edifice of utter similarity across the board. Some journals (e.g., BMJ) have been practicing open peer review for quite some time now. And some new journals (e.g. PeerJ) are also pushing into new territory on this front.

Slow science

I have an admission to make. All the way through my Ph.D. studies and on into my first postdoctoral stint, I had no idea what an impact factor was. I still remember my first encounter with the concept. A number of fellow postdocs and students were discussing which journal a particular paper that one of them was working on should be sent to. After a bit of listening (and probably nodding along cluelessly with the discussion) I found a computer and looked it up. Most of you reading this probably know what it is. But, for the record, it is a measure of how many times recent articles in a given journal are cited compared to recent articles in other journals. And this is supposed to allow ranking of a specific journal’s importance compared to others. Of course, this whole endeavor is fraught with problems. But even so, it’s become well nigh impossible to hold an extended conversation about academic publishing with a group of scientists without impact factor considerations coming up.

I have another admission to make. Until I began the process of applying for tenure a while back I had never heard of an h-index. Suddenly I found it was as vital to my academic life as is the oxygen level in my blood to my real life. So, off I went to Google Scholar where I found that not only was my (decent, but somewhat modest) h-index calculated for me, but so was my i10-index. I hesitate to bore you with details, but in case you don’t know what these are and really need the information here you go…

To calculate your h-index, put your papers in order from most cited to least cited. Then count down the ranked papers from top to bottom until you get to the last point where a paper has at least as many citations as its rank on the list. That is your h-index.

An i10-index is simpler – it’s the number of your papers with at least 10 citations.
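Both calculations are simple enough to write out in a few lines of Python (the citation counts here are made up for illustration):

```python
def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most cited to least cited
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank      # this paper still has at least as many citations as its rank
        else:
            break         # past this point the index can't grow
    return h

def i10_index(citations):
    """i10-index: the number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

# Five papers with these (invented) citation counts:
papers = [25, 8, 5, 3, 3]
print(h_index(papers))    # 3: the top three papers each have at least 3 citations
print(i10_index(papers))  # 1: only one paper has 10 or more citations
```

Sorting first means the count-down-the-ranked-list procedure described above translates directly into the loop.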

Both of these are influenced by age or, more precisely, academic age (how long you’ve been in the game) and by how much other people make use of your findings in their own work.

To a science outsider these measures might sound a bit odd. But despite their issues they are now the standard for how university administrators, granting agencies, and others judge academic work. For better or for worse scientists and their publications are now part of a Google-sized numbers game.

Is it in the best interests of science, and society, that measures like this are the yardsticks used to judge scientific worth? Joern Fischer, Euan Ritchie, and Jan Hanspach argue a persuasive “no” to that question in a short opinion piece in TREE (27:473-474) entitled “Academia’s obsession with quantity.” They explain that, among other things, the quantity obsession is concentrating huge amounts of resources among a small cadre of large research groups. And the push for speedy publication in high-impact journals is forcing a focus on fast and shallow rather than reflective thought, deep experimentation, and patient observation. Careful lab research and long-term field studies are taking a back seat to expedient and efficient – but ultimately less satisfying – answers. Beyond that, and arguably more importantly, the love of indices is hurting the families and other relationships of academics.

To quote Fischer et al.: “(the) modern mantra of quantity is taking a heavy toll on two prerequisites for generating wisdom: creativity and reflection.”

Charles Darwin’s voyage on the Beagle lasted from 1831 to 1836. “On the Origin of Species” was published in 1859, more than twenty years after the ship had docked, and then only under duress as Alfred Wallace was hot on the same trail.

Gregor Mendel published his important work on the transmission of traits in a little known journal. His work only saw the light of day years later when the rest of the world had basically caught up with his ideas.

Both of these individuals, and many others of their day, were methodical, thoughtful, and not in a rush to publish. If Darwin had been alive today, he would have had pressure to put out several papers before he even got off of the ship. His granting agency would have expected him to “meet milestones,” “accomplish outcomes,” and fill out innumerable Gantt charts on a semi-quarterly basis. He would have spent most of his days responding to emails rather than collecting specimens.

Mendel’s supervisor would have been asking him “why on earth would you want to publish in that journal?” And the editor of the high-impact journal that received his work probably would have written back “Peas? Are you serious?”

But without the methodical research of the past – and by “past” we barely have to go back much more than a decade or so to see slower science – where would we be today? Does our newly hyper-caffeinated research world really work better than the more contemplative system of Mendel, Wallace, and Darwin? Is there some happy medium that we can all agree on?

I would argue that things are starting to change. Just like the music industry was finally forced to change in recent years, technology is going to force academia to change as well. In great part this is due to the rise of open access journals. These journals – such as offerings from PLoS, eLife, PeerJ, F1000 Research, and Ecosphere – are changing the publishing landscape. And the academic world will have little choice but to move along in step. Thankfully, much of the academic rank and file is quite happy to jump on board this particular train. Besides offering research results – which were likely paid for with public money – to the public for free, these journals also offer article-level metrics. That means that instead of a journal-wide impact factor, each article can be assessed by the number of downloads and/or citations. Many of these journals also promise to publish all rigorous research that passes peer review no matter how “sexy” it seems to be at the moment. So, if someone takes the time and effort for careful research on pea genetics, they can get it published even if much of the world currently couldn’t care less about peas. The crowd gets to decide – either immediately or over time – if the findings are worth the electrons that light up the pixels on their devices.

It is starting to look like this is another case of “the more things change, the more they (return to) the same.” Just as it seemed that letter writing was dying, in came email. And now, just as it seems that contemplative science and judgement of the merit of single works were going out the window, along comes the open access publishing paradigm.

These open access endeavors deserve our support. And I am looking forward to seeing where this takes us in the coming years.