What CEOs Say

[This post was inspired by Joe Esposito’s Scholarly Kitchen piece reporting on a roundtable of startup-CEOs. Thanks Joe!]

At the 2015 STM meeting in Frankfurt just prior to the Book Fair, the last open session of the day was a CEO Roundtable. These were not CEOs of startups, but CEOs and presidents of some of the world’s most established publishing brands:

  • American Chemical Society: Brian Crawford
  • Institute of Physics: Steven Hall
  • Copyright Clearance Center: Tracey Armstrong
  • Elsevier: Ron Mobed
  • Wiley: Philip Carpenter

To facilitate the roundtable, a moderator posed a question, and then all five CEOs responded. There were eight questions. I will use company names rather than individual names in the readout below. In some cases, the response of a particular CEO was so distinctive that I will quote him or her.

Where is the Conversation About Articles?

It has been nearly two decades since The BMJ introduced “Rapid Responses” with HighWire, and nearly as long since Pediatrics introduced “Post Publication Peer Review” (“P3R”). These were the two earliest examples of the use of HighWire’s “eletters” technology. Unlike some other innovations of that time – e.g., “download to PowerPoint” – there were few followers of these leaders in online commenting. Even after Tim O’Reilly declared that Web 2.0 was upon us – when the readers on the web became writers, and a conversation ensued – only a few HighWire-hosted journals picked up commenting features. Commenting now seems to be everywhere on the web except on scholarly journal sites.

Where did the conversation go, and why isn’t it happening on journal sites?

What’s the Definition of “Read”?

[This is the third in an occasional series of posts on the results from HighWire’s researcher-interview series. The previous posts in the series were about how researchers locate information on topics new to them – the value of “grey literature” – and whether journal brand was important in selecting what to read – yes and no.]

2002 was a time when people still used the term “e-journal” to refer to scholarly journals that were online. This “e” suggested that not all journals were online – and certainly that not all reading of journals was done online. But a decade later the “e” had pretty much been dropped, since all scholarly journals were online, and nearly all reading started with an online search or browse.

In 2002, we asked researchers how many journals they read. The answer was generally in the range of 3-5. But a decade later, we asked the same question and the answer was generally in the range of 8-10. People reported they were reading twice as many journals within the space of a decade.

How could this be?

[Guest Post] What’s Happening to Peer Review?

[Our guest post is from Michael Jubb, who recently received the 2015 ALPSP Award for Contribution to Scholarly Publishing (bio below).  Michael’s 2015 study of peer review is the best catalog of the many experiments going on in this area (links below).  It leads me to conclude that there is a vibrant and vital frontier of experimentation in review processes, from which Michael is a chief correspondent.  — John Sack]

Recent years have seen an increasing pace of experiment and innovation in peer review systems and processes. In a short report earlier this year, I found publishers and others responding in a number of ways to some of the critiques of peer review. The criticisms are various: that peer review is hugely expensive; causes delay in disseminating results; imposes unreasonable demands on authors, reviewers and editors; tends to conservatism in response to innovative and multidisciplinary research; and is prey to unfairness and bias on the part of reviewers. Moreover, the rising numbers of papers amended or retracted post-publication seem to show that peer review is ineffective in preventing the publication of flawed papers.

A recent Taylor and Francis survey of researchers’ perceptions and experiences has been criticised by Phil Davis, but it shows some interesting gaps between researchers’ expectations of what peer review should deliver and their experience of what it actually delivers, in areas including checking on methodologies, highlighting omissions, detecting fraud and plagiarism, and improving the quality of articles. Part of the problem here is that while researchers attach great importance to the principle of peer review, there is no common agreement as to what its core purposes are (the survey provided 16 options); and there is thus a bewildering variety of views and concerns about its practice and the extent to which it fulfills those purposes.

The pilots and experiments of recent years thus seek to exploit new technologies to address a number of different concerns, often in tension with each other. They include publishing reviews alongside articles; online posting of pre-prints which are then reviewed before a final version is published; cascading or portable reviews; post-publication comments and ratings on publishers’ and on independent platforms, on blogs and twitter; services enabling researchers to aggregate and gain credit for their reviews; and new peer review services that operate independently of journals. A reasonably comprehensive catalogue and assessment of new services, experiments and pilots is given in my report.

But at the heart of the concerns publishers and others are seeking to address lie the damaging pressures that arise as a result of publishing being used not so much to disseminate new findings, but to confer scholarly credit on the authors. A recent report from the Nuffield Council on Bioethics noted that publishing in high status journals – along with high citation numbers – is still seen as critically important in enabling researchers to gain funding, jobs and promotions. There is thus strong pressure on researchers not only to ‘publish or perish’, but to publish in particular journals.

Such pressures have a profound impact on the scholarly publishing ecology. Some kinds of research – negative findings, replications or refutations of others’ work – may not be published; or published where the chances of widespread recognition are low. The pressures also increase incentives for cutting corners or, worse, misconduct. Publishers and editors must therefore seek to ensure their systems are effective against the publication of sloppy or fraudulent work. They worry about difficulties in recruiting high-quality reviewers; the need for effective guidance and training; and risks to journals’ reputation – even to scholarly publishing as a whole – when as a result of all these pressures, major problems arise, leading to retractions and so on.

But publishers and editors also worry about reviews which make unreasonable demands, or set impossible standards, especially in fast-moving fields. Many are thus seeking to enhance their ability to achieve an appropriate balance between properly rigorous and unduly critical reviews, in six key areas.

First, despite conflicting views from researchers, many publishers are keen to see greater transparency. But they distinguish between revealing reviewers’ identities, and revealing the content of reviews to readers as well as authors, and do not want to be too far in advance of their communities in promoting either.

Second, many publishers want more interaction between editors, reviewers and authors, including engagement with post-publication comments, reviews and ratings. There is increasing take-up of opportunities to comment, usually on sites other than the publisher’s; but there is little interaction, since pre- and post-publication activities are independent of each other. Many wish to join them up, provided it does not add significantly to burdens on editors and reviewers.

Third, article-level metrics now – through Altmetric, Plum Analytics, Impact Story and others – cover comments and ratings, mentions in social media and news sites, bookmarking and so on. There is debate about how metrics are generated, and about weightings and aggregation; but they are an increasingly important feature of scholarly publishing, and may help to undercut the baleful influence of JIFs.

Fourth, credit and accreditation. Researchers differ as to how much credit can or should be attached to reviews. But publishers, as well as start-ups like Publons, are keen to see proper credit for reviewers. Whether via publisher-specific or third-party services, recognition and credit are likely to be an increasingly significant part of the landscape.

Fifth, guidance and training, along with feedback and assessment. A UK House of Commons Committee highlighted the importance of these in 2011; and publishers are already making improvements, which are essential if the peer review system is to sustain the confidence of the research community.

Finally, differentiation between the different purposes of peer review. The distinctions are not clear-cut; but the mega-journals show the value of distinguishing between scientific soundness, and some of the many other proclaimed purposes of peer review. This may help reduce redundant effort when papers are submitted successively to several journals. ‘Cascade’ systems are becoming more common, though whether more reviews will be transferred between publishers is not yet clear.

Publishers are concerned not to get too far in advance of, or alienate, their subject communities. Hence innovations, even from new start-ups, are often introduced as pilots or options. Editors play a key role, advising on what may or may not be acceptable. This role will be enhanced as the pace of innovation quickens.

More broadly, we need more debate about the kinds of review services that researchers want, and the purposes those services should seek to fulfill. Unless the purposes are defined more clearly, some of the current experimentation may fail to deliver what’s needed.

Dr Michael Jubb 

Michael Jubb has more than forty years’ experience in research policy, funding and administration, as well as scholarly communications, starting from his time as an academic historian, an archivist, and then as Deputy Secretary of the British Academy and Deputy Chief Executive of the Arts and Humanities Research Board. He is the founding Director of the Research Information Network, where he has worked with a range of organisations on issues ranging from researchers’ use of library and information services, their publishing habits, and how they manage and share (or not) the data they create, to the economics of scholarly communications. He was Secretary to the Finch Committee in 2011-12 and was responsible for drafting its report. More recently, he led the team which reported in 2015 on progress in the transition to open access.