Re-use as Impact: How re-assessing what we mean by “impact” can support improving the return on public investment, develop open research practice, and widen engagement [v0]

This is version 0 of an abstract to be presented at altmetrics11.

Cameron Neylon

Research funding bodies are under pressure to maximise the wider impact of their investment in research and to demonstrate this impact to the wider community and to their ultimate funders in government. Measuring and comparing impact, whether in the form of direct or indirect economic outcomes, social outcomes such as education and mobility, or community outcomes including health and quality of life, is challenging, perhaps impossible, due to the inherent mismatch between the time frames of politics and those of research exploitation. Nonetheless, the rhetoric of government is quite sophisticated in looking for outcomes beyond peer-reviewed papers and in demanding that contributions to the wider well-being of the community are recognised (1).

At the same time, more open approaches to scientific communication are increasingly seen as ways to improve the overall return on investment of public money into scientific research (2, 3). However, uptake of such approaches — for example publishing in Open Access journals, public archiving of research data, and making source code available — has been limited (3). It is widely thought that effecting the desired cultural changes towards openness in the research community depends on creating the right incentives (see e.g. 4). Current reputation systems for researchers are based almost entirely on two measures: the prestige of the journals in which they publish and their research funding income. These measures are tightly coupled: obtaining grant funding is highly dependent on a record of publishing in prestigious journals.

I propose that what links these two themes – the desire for more open practice and the need to demonstrate impact – is the measurement of re-use (5). The argument for open practice is that it enables re-use, particularly unexpected re-use, of research outputs, thereby maximising the return on public investment. While there may be arguments over exactly what “impact” means or should mean, at its centre is the use of academic outputs for a diverse range of purposes that lie beyond pure research. Furthermore, I would argue that measurement of re-use is a credible alternative to current prestige-based metrics such as a journal’s impact factor. The popularity of the H-index (6) and the rapid growth of interest in article-level metrics (7) – both measures, albeit crude, of re-use – show that the idea of measuring influence, impact, and re-use is comprehensible to stakeholders.
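As a concrete point of reference, the H-index (6) reduces a record of re-use to a single number: the largest h such that a researcher has h outputs each cited at least h times. A minimal sketch of the computation (the function name and example citation counts are illustrative, not drawn from the abstract):

```python
def h_index(citations):
    """Return the H-index: the largest h such that at least h outputs
    have each been cited (re-used) at least h times."""
    h = 0
    # Rank outputs from most to least cited, then walk down the ranking.
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # → 4
```

The same crudeness the abstract notes is visible here: every citation counts equally, so the metric cannot distinguish the kinds of re-use (influence, data citation, real-world uptake) discussed below.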

Re-assessing the idea of impact, and particularly its measurement in terms of re-use, has three significant advantages. Firstly, it aligns the pragmatic interests of research funders much more closely with those of the advocates of open research practice, including patient advocates, researchers, and the wider research-interested community. While open research advocacy may be seen as a philosophical stance, for many advocates the aims are pragmatic: more rapid access to new treatments, effective environmental policy, and wider public engagement. By surfacing a strongly pragmatic rationale for open research practice, viewing impact as re-use provides a common vocabulary and set of aims that can connect open research advocacy with the realpolitik of research funders and their relationship with the ultimate funders.

Secondly, taking this pragmatic approach enables a rational tensioning of the philosophical basis of open research advocacy against the need to optimise outcomes in the real world. There are places and situations where “pure” open approaches are either counterproductive or impossible to apply. Applying a pragmatic and, in principle, measurable criterion for optimisation enables these edge cases to be identified and can again provide a shared vocabulary for discussing the optimum approaches. Finally, re-use and re-usability provide a measurement construct that can enable a rational approach to the question of how to optimise research practice, given the impossibility of predicting the value of research outcomes in advance. Where research is speculative, the criterion of re-usability can still be applied: has the research been communicated and archived in such a way that its potential for re-use is maximised? While the outputs of much research will see limited re-use, particularly in the short term, researchers can be judged on the extent to which they have maximised the ability of others to build upon or re-use the outputs of their research.

Reconfiguring our view of research impact through measuring re-use has many advantages but poses significant challenges in the development of viable metrics. The measurement of re-use is non-trivial even in the relatively well-described and well-captured world of academic peer-reviewed journals. However, as research communication on the web starts to take off, there is an opportunity to embed effective systems of measurement into the technical and cultural communication infrastructure. The expansion of measures of use by bibliographic tools, and through citation on social networking and broadcast services, provides a model of what can be done technically. The question is whether the community of researchers, funders, and the public has the will to build the cultural norms that make these measurement systems effective.


1. Science, Innovation and the Economy, speech by David Willetts, UK Minister of State for Universities and Science, Royal Institution, 9 July 2010

2. Jenny Fry, Suzanne Lockyer, Charles Oppenheim, John Houghton and Bruce Rasmussen (2009) Identifying benefits arising from the curation and open sharing of research data produced by UK Higher Education and research institutes

3. Research Information Network (2010) Open Science Case Studies

4. Julia Lane (2010) Let’s make science metrics more scientific, Nature 464(7288)

5. Cameron Neylon (2010) Metrics of use: How to align researcher incentives with outcomes

6. J. E. Hirsch (2005) An index to quantify an individual’s scientific research output, Proceedings of the National Academy of Sciences (USA), 102(46): 16569

7. C Neylon and S Wu (2009) Article-Level Metrics and the Evolution of Scientific Impact. PLoS Biol 7(11): e1000242. doi:10.1371/journal.pbio.1000242


  1. birukou
    Posted May 4, 2011 at 9:58 pm | Permalink

    The idea of promoting re-use is very nice, even though I think we need better metrics than the H-index and article-level metrics.
    For instance, if one’s approach is largely based on ideas developed by someone else – is that positive re-use? I would argue it is, but some people would say that the novelty of such a paper is limited. Another aspect to consider is self-reuse: if I write a journal paper that consists 70% of my conference paper – is that good? And if I have 5 such papers? I would argue that such re-use must always be made explicit so that the scientific community can judge whether the increment is sufficient. One last aspect, only briefly mentioned in the article, is the re-use of scientific results in “the real world”, which brings no citations but is probably even more valuable. How do we track those – through the number of patents created, companies using the approach, etc.?

  2. cameron
    Posted May 6, 2011 at 8:56 pm | Permalink

    Alex, I agree absolutely that we need better metrics than the H-index and that we need to track a much more diverse range of types of citation. The work of David Shotton and others on citation typing ontologies is particularly interesting here. So influence (and influence is a form of re-use) should be tracked, data citation is important, and yes, tracking research outcomes into policy and “real world” settings is hard, but I don’t think impossible. Is what you’re saying that you’d like to see some more concrete examples of how this might work?

  3. Daniel Silvestre
    Posted May 9, 2011 at 3:23 pm | Permalink

    While re-use seems to be a sensible alternative to popular metrics, it seems to suffer from the same network effects that plague them (e.g. preferential attachment, large ring/group effects, etc.). To be effective, an alternative metric that aims to promote open science should be able to value community effects more than “monetizing” ones. Re-use of science does not always mean an increase in quality/knowledge; it could be just a by-product of maximising profits. The NGS machines case is an excellent example: a lot of very pragmatic re-use, but still very little science, and the still-risky field of personal genomics. Anyway, the question remains: what should a metric measure to point the way towards satisfying both science and the community?
