Aligning publishing technology with the funder view of impact

Few topics in digital publishing cause as much debate as research impact. A lot of this debate – within the publishing world, at least – has tended to focus on ways of improving (or improving on) existing mechanisms. How can we make Impact Factor work better? Should we put less emphasis on the journal and more on the article – or on the author?

Funders, meanwhile, seem to be in a different galaxy.

In UK higher education, changes to the REF (the Research Excellence Framework, which informs how public research funding is allocated to institutions) manifest a desire on the part of funders to know about the practical impact of research beyond the scholarly bubble – and beyond even the media reaction to published research: they want to know what impact it has had, what difference it has made, in the ‘real’ world.

This difference of perspective was highlighted at a recent ‘Research Impact Spotlight’ event hosted by Digital Science, as reported by Research Information. Reading about it provided plenty of food for thought – not just about the real meaning of impact, but also about how we structure our technology architecture.

Beyond attention metrics

Alongside Jonathan Adams of Digital Science, the participants in the event were Ben Goldacre (an advocate for open science whose views we have previously given oxygen to on this blog), Liz Allen of the Wellcome Trust, and Euan Adie, founder of Altmetric.

To deal with the last-mentioned first: much attention has been given to altmetrics in the debate over impact, but Adie himself suggests that they are not a measure of impact at all. Both traditional metrics like Impact Factor and new-kid-on-the-block altmetrics can be classified as ‘attention’ metrics – calibrating, respectively, scholarly attention and broader, non-academic attention. But probably the most they can show is that a particular piece of research has changed the way people think about a problem, and possibly that it has changed practice. In terms of showing a social, economic or cultural benefit, however, different measures of impact are needed.

Digital Science has been involved in helping to establish such measures through its work with the Higher Education Funding Council for England (HEFCE). HEFCE has chosen to base its evaluation of impact on case studies submitted by institutions as part of the REF, and Digital Science has helped with the analysis of these case studies through its work on the 2014 REF impact case studies database and website. In December 2014, for the first time, impact (along with outputs and environment) formed part of the overall quality score given to institutions by HEFCE.

Case studies submitted to the 2014 REF have been text-mined and an initial analysis report issued in collaboration with King’s College London. The report has many pages of brightly-coloured, detailed charts, heat-maps, etc. showing the impact of research on various stakeholder groups in the community. There is a wealth of information here about the effects UK research is having in the wider world, but this case-study-based methodology for evaluating impact, clearly a ‘work in progress’, has shortcomings at present that the report acknowledges in its ‘Lessons for future iterations of the Research Excellence Framework’ section.

Broadly, these are to do with the difficulties of deducing quantitative measures of impact from case study material that is text-based and therefore (arguably) inherently qualitative. Where quantitative measures are submitted, institutions have considerable choice in how they select and present their numbers, making it difficult to compare like with like.
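To make the text-mining side of this concrete, here is a minimal, purely illustrative sketch – not the approach Digital Science and King’s College London actually used, and with entirely hypothetical categories and keywords – of how free-text case studies might be tagged against impact categories by simple keyword counting:

from collections import Counter

# Hypothetical impact categories and indicative keywords (illustrative only;
# the real REF analysis used far more sophisticated semantic techniques).
CATEGORIES = {
    "health": ["clinical", "patients", "life expectancy", "nhs"],
    "policy": ["legislation", "guidelines", "government", "regulation"],
    "economy": ["jobs", "revenue", "start-up", "licensing"],
}

def tag_case_study(text):
    """Count keyword hits per impact category in one case-study narrative."""
    text = text.lower()
    hits = Counter()
    for category, keywords in CATEGORIES.items():
        hits[category] = sum(text.count(keyword) for keyword in keywords)
    return hits

# A two-sentence stand-in for a real case-study narrative.
sample = ("The findings informed new clinical guidelines, "
          "improving outcomes for patients across the NHS.")
print(tag_case_study(sample).most_common())

Even this toy example exposes the problem the report describes: the counts it produces depend entirely on which categories and keywords are chosen, which is exactly why comparing like with like across institutions is so hard.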

It is a fascinating document to look at, but slightly boggling for anyone without a degree in statistics and a working knowledge of semantic technologies. And although ultimately it feeds into league tables of institutional quality, as a way of measuring impact this case-study-based approach does not narrow down so easily and satisfactorily to a single number as do attention metrics such as Impact Factor.

This is hardly surprising, since IF is based on a purely numerical measure, counting citations. Impact in the wider community, by contrast, produces outputs that can be quantified, but they are many and various, often discipline-specific, and hard to compare. How would you put an increase in life-expectancy within a particular occupational population, for instance, on a par with a decrease in reoffending rates among a particular category of prisoners?
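For reference, the standard two-year Journal Impact Factor boils down to a single ratio – which is precisely what makes it so easy to rank with, and so ill-suited to capturing the varied outcomes described above:

\[
\mathrm{JIF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]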

Measuring what really counts

It is important to realize that this is not a problem unique to the scholarly world; it has parallels with many other areas of business life where analytics play a significant part. In the past, evaluative data were difficult and costly to come by, so we tended to base our measures on the few numbers we could get. Latterly, the web has opened a firehose of data that we now struggle to interpret (which is why data scientists are in such high demand across the labour market).

An older, more data-starved age gave us Impact Factor. Assessing research impact back then relied on counting citations because there wasn’t much else to count. Nowadays, we have an infinitely wider choice of data inputs, but somehow the news doesn’t seem to have sunk in.

I think Liz Allen of the Wellcome Trust puts her finger on it when, lamenting the tendency to focus too much on new tools and technologies, she says that to get the most out of these tools we need a ‘better understanding of where they measure what counts, or only count what can be measured’.

Notionally, everything can be measured in this new world. Our focus should be on making the best use of technology to help us measure the things that really count. And it is clear that, in the minds of funders, impact of research in the wider community counts more than impact on the minds of other scholars.

Liz Allen sees opportunities around ‘openness, discoverability and interoperability’ in helping to maximize the impact of funding. It seems to me that publishers have a big part to play in meeting these requirements of funders, a stakeholder group of resurgent influence in the new world brought about by open access.

And we, as development partners, need to be building smart technology architectures that also help to realize these aims. At HighWire we have long considered that alignment with the business aims of our clients also involves alignment with the needs of their stakeholders. That’s why we put so much emphasis on UX – but it also means that, in building our technology architecture, we attach critical importance to just the qualities Liz Allen enumerates: openness, discoverability and interoperability.
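As one small, hypothetical illustration of what those three qualities can look like in practice (a sketch, not a description of any particular HighWire product): exposing standard, machine-readable metadata – for example schema.org ScholarlyArticle markup as JSON-LD – makes an article easier for aggregators, funders and impact-tracking tools to discover and reuse. All values below are invented.

import json

# Hypothetical schema.org ScholarlyArticle metadata that a publishing
# platform might embed in an article page as JSON-LD. Values are invented.
article_metadata = {
    "@context": "https://schema.org",
    "@type": "ScholarlyArticle",
    "headline": "An example article title",
    "author": [{"@type": "Person", "name": "A. Researcher"}],
    "datePublished": "2015-04-01",
    "identifier": "https://doi.org/10.9999/example",  # placeholder DOI
    "isAccessibleForFree": True,
    "license": "https://creativecommons.org/licenses/by/4.0/",
}

# This JSON string is what would sit inside a <script type="application/ld+json">
# element in the article page's HTML.
print(json.dumps(article_metadata, indent=2))

Openness here is the licence and free-access flags; discoverability comes from using a standard vocabulary that search engines and aggregators already understand; interoperability is the fact that it is plain JSON any downstream tool can parse.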
