In April & May 2015 the Royal Society held a two-part conference on scholarly scientific communication. Before the summer ends I want to write up my impressions of the first part of the conference, which was largely about peer review. There is important material here for editors and societies who are considering editorial changes as they head into the fall cycle of board meetings.
The conference was notable in that the Royal Society invited delegates from all the types of stakeholders in the “ecosystem” of scientific communication. So this was not at all the typical “publishers-only meeting”. Of course there were publishers present, along with journal editors and researchers at various career stages. But there were also representatives from funders and institutions, from technology and commercial organizations, along with experts in the history of science. The mix was cross-disciplinary as well: physics, biology, chemistry, etc. (The historian just mentioned is Aileen Fyfe of St. Andrews. She provided some commentary from outside the sciences. Prof. Fyfe was able to remind us how “modern” peer review came about and what its methods were designed to do – but also that complaints about the process are not something new to the last 50 years.)
At the end of this post, I’ll provide pointers to the conference details, including a summary and audio recordings. But first, the highlights:
Going into this meeting, I had observed that some of the most interesting things happening in the publishing ecosystem are happening “upstream” from the published-journal website: they are happening in the peer-review workflow. There’s plenty of evidence for this: peer-review changes and experiments going on at BMJ, at eLife, at PLOS, at the Royal Society, at Cold Spring Harbor Labs, at Faculty of 1000, etc. “But wait, there’s more,” as they say: the Royal Society pointed attendees to a background paper written by the Research Information Network and commissioned by the Wellcome Trust: Scholarly Communication and Peer Review – The Current Landscape and Future Trends. This 30+ page paper covers many of the experiments and trends. If you or your editors are planning experiments this fall, the paper is worth a read-through.
My major takeaway from this meeting is that there was surprising consensus – perhaps even a sense of inevitability – that the practice of posting preprints would address many of the problems in science publishing, particularly in the biomedical sciences. (The practice is long established in fields of physics.) Preprint servers (like arXiv in physics, and bioRxiv from CSHL in the life sciences) actually change the “upstream/downstream” dynamic that I mentioned above: in traditional review models, evaluation precedes distribution; but preprint availability lets distribution precede evaluation. So many of the problems with bias and delay are mitigated by distribution (availability) coming ahead of the review filter. This lets expert readers tap into an information stream, which they can filter for themselves.
Experts doing their own filtering has come up before in HighWire’s work. In researcher interviews that we conducted in 2014, we saw some conflicting commentary: readers were telling us that journal brand was important for identifying articles to read, but they also told us it was irrelevant – sometimes the same researcher told us both. When we pursued this, we found the key, handed to us by a neuroscience postdoc (to paraphrase): “When I’m reading an article in an area in which I’m expert, I don’t really care where it is published, and I don’t need peer review – I can do my own review; for articles outside my expertise, I rely on other experts to review it first.”
This is a clear argument in favor of preprint servers: they get articles in front of all the potential expert readers fast. To borrow a phrase from the conference, preprints don’t “impede science”. They don’t polish it either.
This consensus for preprint servers emerged in the morning discussion on the second day of the conference. I don’t recall seeing Harold Varmus on the second day of the meeting (he was there on the first day) – if he had been there he might have been bemused recalling the horrified reaction to his “E-biomed” preprint server proposal in 1999!
For further reading:
The summary report from the conference is extensive and well edited. Pages 8–10 cover the peer-review discussion. The full four-day meeting agenda is also online, along with links to audio files for those who really want the play-by-play!
Prof. Fyfe and I will be joined by colleagues — John Inglis, who heads Cold Spring Harbor Labs; Dr Simon Kerridge, Director of Research Services at University of Kent and Chair of the Association of Research Managers and Administrators; and Dr Kirsty Edgar, Leverhulme Early Career Research Fellow, at the School of Earth Sciences, University of Bristol — at the upcoming ALPSP meeting at Heathrow for a panel discussion of Peer Review: Evolution, Experiment, and Debate, on Friday morning, 11 September 2015.