The 2016 Fall HighWire Publishers’ Meeting took place this past September in Washington DC, bringing together representatives from publishers across the spectrum of scholarly communications for three days of presentations and discussion. One of the highlights was a panel – ‘The Many Pragmatic Uses of Impact Vizor’ – where three publishers presented ways in which Impact Vizor has been used to answer questions about their journal programs and make more informed strategic decisions.
[This post was inspired by Joe Esposito’s Scholarly Kitchen piece reporting on a roundtable of startup CEOs. Thanks Joe!]
At the 2015 STM meeting in Frankfurt just prior to the Book Fair, the last open session of the day was a CEO Roundtable. These were not CEOs of startups, but CEOs and presidents of some of the world’s most established publishing brands:
- American Chemical Society: Brian Crawford
- Institute of Physics: Steven Hall
- Copyright Clearance Center: Tracey Armstrong
- Elsevier: Ron Mobed
- Wiley: Philip Carpenter
To facilitate the roundtable, a moderator posed a question, and then all five CEOs responded. There were eight questions. I will use company names rather than individual names in the readout below. In some cases, the response of a particular CEO was so distinctive that I will quote him or her.
The transformation in discovery – and its consequences – was the topic of the opening keynote at the September 2015 ALPSP Annual Meeting. Anurag Acharya – co-founder of Google Scholar – spoke and answered questions for an hour. That’s forever in our sound-bite culture, but the talk was both inspirational – about what we had collectively accomplished – and exciting and challenging – about the directions ahead. Anurag’s talk and the Q&A are online as a video, and as audio in parts one and two.
This post is in two parts: Part One covered Anurag’s presentation of what we have accomplished. The present post, Part Two, covers the consequences. Anurag has agreed to address questions about this post that readers put in the comments.
Here is my take on the key topics from Anurag’s keynote.
In Part One, I highlighted the factors that have transformed scholarly communication over the last 10-15 years:
- Search is the new browse
- Full text indexing of current articles plus significant backfiles, joined with relevance ranking, changed how we looked and what we did.
- “Articles stand on their own merit”
- “Bring all researchers to the frontier”
- “So much more you can actually read”
In Part Two of this post, I cover Anurag’s view of What Happens When Finding Everything is So Easy?
“Let no one tell you that ‘Scholarly communication hasn’t changed’”
HighWire conducted its first extensive user studies in 2002. Since then, several things have completely altered the workflow of the researcher:
- full text of most current journal articles is centrally indexed;
- back archives of a significant fraction of the full text research literature are online, and centrally indexed as well.
“Centrally indexed” was a watershed point. In 2002, Google’s web search (i.e., google.com) started indexing the full text of journal literature – including the portion behind paywalls – starting with HighWire and its publishing partners. HighWire saw the use of journal article content go up by one and in some cases two orders of magnitude following this! And then, in 2004, Google Scholar (scholar.google.com) was born, recognizing that the workflow and goals of a researcher are not best supported by a general-purpose internet search engine, no matter how good its ranking algorithms are.
Now, a decade after our first user studies, users report to us that “Finding is easy; reading is hard.”
This transformation in discovery – and its consequences – was the topic of the opening keynote at the September 2015 ALPSP Annual Meeting. Anurag Acharya – co-founder of Google Scholar – spoke and answered questions for an hour. That’s forever in our sound-bite culture, but the talk was both inspirational – about what we had collectively accomplished – and exciting and challenging – about the directions ahead. Anurag’s talk and the Q&A are online as a video, and as audio in parts one and two.
This post is in two parts: the present Part One covers Anurag’s presentation of what we have accomplished. Part Two, to be posted on Monday, October 12, covers the consequences. Anurag has agreed to address questions that readers put in the comments.
Here is my take on the key topics from Anurag’s talk.
Two library events this summer – plus librarian interviews – have provided an updated perspective on what’s happening at the intersection of library, technology, research and publishing. There are a lot of moving parts in the research ecosystem, and librarians are acting as consultants on changing research tools and services. In this role libraries are closer than ever to researchers and their research processes – the people creating new knowledge. This of course is “upstream” from their role providing access, dissemination, preservation and gateway services to the readers of primary and secondary materials. Being involved with both the producers and the consumers of knowledge creates a cycle that is familiar to publishers, so it is natural that libraries will begin to take on some of the same roles that publishers have.
Some had predicted that libraries would become the academic equivalent of the university procurement office: negotiating deals for content delivered through campus-wide licenses. While that is part of the library’s role – and one that was emphasized in all the discussion of “the serials crisis” – it doesn’t circumscribe the domain much. The activity of the library “upstream” from content delivery is leading HighWire to re-form its Library Advisory Council this fall.
SSP’s librarian focus group session was recently held in Chicago. Four senior librarians participated – Maria Bonn, Michael Levine-Clark, Rick Burke, and Joy Kirchner (detail on their roles is at the end of this post) — with good Q&A with the audience; the session was well moderated by Sara Rouhi.
Rather than try to sum up the one-day event to one or two themes, I’ll let the librarians speak for themselves about key topics. Each of the reported items below is my paraphrase of a statement of one of the four librarians who were members of the panel.
Q: Where do libraries fit into scholarly communications today?
Libraries are now becoming more involved in research activities, not just handling the end product of research.
Scholars are now ‘considering their communication options’, not just publishing manuscripts. This gets into online data sets, annotation, audio, deep hyperlinking, etc. The library is involved in educating about and providing these services.
Researchers ‘typically know how to get grants and where to publish in their field; they don’t know what is going on at the margins’ of scholarly communication – i.e., the changing tools and resources.
There was significant discussion about a small but rapidly increasing number of requests for text & data mining; several of the panelists reported this. It was an eye-opener for me: I participate in CrossRef’s Text & Data Mining work group, and while much progress has been made on the technology, we were hearing little broad interest from researchers. But perhaps this is about to change!
The time-scale/cycle to investigate, learn and use a technology (like Text & Data Mining) has to be a quick one for use in student or teacher projects, because of the short semester or quarter period.
Data – and data curation as a library specialty – was discussed following on the Text & Data Mining topic. ‘Data is a new kind of scholarship – data is a kind of content.’ I think we are seeing this in digital humanities work, with the challenge that each case seems unique leading to technical challenges in publishing and preservation.
Q: How are libraries helping with assessment?
Traditional measures of library activity – ‘gate count’, ‘reference transactions’, ‘journal usage’ and ‘cost per use’ – don’t have much meaning at all for assessment. They don’t link well to outcome measures – student graduation, retention, etc. Perhaps because of privacy concerns, libraries don’t link specific use with outcomes (e.g., are students who use journals more likely to graduate on time?).
Use of some discovery services is going down. This may be appropriate. E.g., as more current content is available, perhaps use of archives should decline.
The library is being asked to assist faculty in finding metrics to help with their dossiers.
Q: Is COUNTER – now over a decade old – still ‘fit for purpose’?
With so many ebooks online now, ‘COUNTER book usage reporting is lacking’ in information that would give you a sense of the use of works.
Q: Do libraries add subscriptions to journals?
Best to quote this directly: “We don’t proactively subscribe to new journals; it has to be a request from a faculty member; the request has to come from the person directly. The reality is that we have to cancel something to start something new. New journals are harder than journals that are established.”
Q: How do libraries work as campus educators?
There was a broad spectrum of areas in which individual libraries had undertaken campus community education: how to pitch a book proposal; how to be an editor; how to use citation and alt metrics; how to use the institutional repository.
There was a good discussion of the necessity of partnering inside the institution to understand and afford some new tools. SciVal, ORCID, Altmetrics were examples. There wasn’t an obvious “home” (to pay) for some of these tools. If the library wants a role it has to invest; but there is disagreement on this with some saying that the library shouldn’t take something on if it can’t afford it.
Q: What about discovery of OA content?
Library tasks – such as cataloging and preservation – that are triggered by or linked to purchase decisions don’t cover OA content. And publishers might be cautious about including their OA content in materials (such as MARC records or KBART lists) that they send to libraries, since libraries might mistakenly think they had paid for it. So library-driven discovery services might include all the content the library has paid for, but not all it has access to.
- Maria Bonn, Senior Lecturer, Graduate School of Library and Information Science, University of Illinois
- Michael Levine-Clark, Associate Dean for Scholarly Communication and Collections Services at the University of Denver Libraries
- Rick Burke, Executive Director, SCELC
- Joy Kirchner, University Librarian, York University
- The panel was moderated by Sara Rouhi, Product Sales Manager, Altmetric LLP and member of the SSP Education Committee
In April & May 2015 the Royal Society held a two-part conference on scholarly scientific communication. Before the summer ends I want to write my impressions of the first part of the conference, which was largely about peer review. There is important material from this conference, for editors and societies who are considering editorial changes as they go into the fall cycle of board meetings.
The conference was notable in that the Royal Society invited delegates from all the types of stakeholders in the “ecosystem” of scientific communication. So this was not at all the typical “publishers-only meeting”. Of course there were publishers present, along with journal editors and researchers at various career stages. But there were also representatives from funders and institutions, from technology and commercial organizations as well, along with experts in the history of science. The mix was cross-disciplinary too: physics, biology, chemistry, etc. (The historian just mentioned is Aileen Fyfe of St. Andrews. She provided some commentary from outside the sciences. Prof. Fyfe could remind us how “modern” peer review came about, and what its methods were designed to do – but also that complaints about the process are not something new to the last 50 years.)
At the end of this post, I’ll provide pointers to the conference details, including a summary and audio recordings. But first, the highlights:
Going into this meeting, I had observed that some of the most interesting things happening in the publishing ecosystem are happening “upstream” from the published-journal web-site: they are happening in the peer-review workflow. There’s plenty of evidence for this: peer review changes and experiments going on at BMJ, at eLife, at PLOS, at the Royal Society, at Cold Spring Harbor Labs, at Faculty of 1000, etc. “But wait, there’s more” as they say: the Royal Society pointed attendees to a background paper written by the Research Information Network and commissioned by the Wellcome Trust: Scholarly Communication and Peer Review – The Current Landscape and Future Trends. This 30+ page paper points to a lot of the experiments and trends. If you or your editors are planning experiments this fall, the paper is worth a run through.
My major takeaway from this meeting is that there was surprising consensus – perhaps even a sense of inevitability – that the practice of posting preprints would address many of the problems in science publishing, particularly in the biomedical sciences. (The practice is long established in fields of physics.) Preprints (like arXiv in physics, and bioRxiv from CSHL in the life sciences) actually change the “upstream/downstream” dynamic that I mentioned above: in traditional review models, evaluation precedes distribution; but preprint availability lets distribution precede evaluation. So many of the problems with bias and delay are mitigated by distribution (availability) coming ahead of the review filter. This lets expert readers tap into an information stream, which they can filter for themselves.
Experts doing their own filtering has come up before in HighWire’s work. In researcher interviews that we conducted in 2014, we saw some conflicting commentary: readers were telling us that journal brand was important to identifying articles to read, but they also told us it was irrelevant – sometimes the same researcher told us both. When we pursued this, we found the key, handed to us by a neuroscience postdoc (to paraphrase): “When I’m reading an article in an area in which I’m expert, I don’t really care where it is published, and don’t need peer review – I can do my own review; for articles outside my expertise, I rely on other experts to review it first.”
This is a clear argument in favor of preprint servers: they get articles in front of all the potential expert readers fast. To borrow a phrase from the conference, preprints don’t “impede science”. They don’t polish it either.
This consensus for preprint servers emerged in the morning discussion on the second day of the conference. I don’t recall seeing Harold Varmus on the second day of the meeting (he was there the first day) – had he been there, he might have been bemused recalling the horrified reaction to his “E-biomed” preprint server proposal in 1999!
For further reading:
The summary report from the conference is extensive and well-edited. Pages 8-10 are about the peer review discussion. The full four-day meeting agenda is also online, along with links to audio files for those who really want the play-by-play!
Prof. Fyfe and I will be joined by colleagues — John Inglis, who heads Cold Spring Harbor Labs; Dr Simon Kerridge, Director of Research Services at University of Kent and Chair of the Association of Research Managers and Administrators; and Dr Kirsty Edgar, Leverhulme Early Career Research Fellow, at the School of Earth Sciences, University of Bristol — at the upcoming ALPSP meeting at Heathrow for a panel discussion of Peer Review: Evolution, Experiment, and Debate, on Friday morning, 11 September 2015.