This is version 0 of an abstract to be presented at altmetrics11.
Research funding bodies are under pressure to maximise the wider impact of their investment in research and to demonstrate this impact to the wider community and to their ultimate funders in government. Measuring and comparing impact, whether in the form of direct or indirect economic outcomes, social outcomes such as education and mobility, or community outcomes including health and quality of life, is challenging, perhaps impossible, due to the inherent mismatch between the time frames of politics and the exploitation of research. Nonetheless the rhetoric of government is quite sophisticated in looking for outcomes beyond peer-reviewed papers and in demanding that contributions to the wider well-being of the community are recognised (1).
At the same time, more open approaches to scientific communication are increasingly seen as ways to improve the overall return on investment of public money in scientific research (2, 3). However, uptake of such approaches — for example publishing in Open Access journals, public archiving of research data, and making source code available — has been limited (3). It is widely thought that effecting the desired cultural change towards openness in the research community depends on creating the right incentives (see e.g. 4). Current reputation systems for researchers are based almost entirely on two measures: the prestige of the journals in which they publish and their research funding income. These measures are tightly coupled: obtaining grant funding is highly dependent on a record of publishing in prestigious journals.
I propose that what links these two themes – the desire for more open practice and the need to demonstrate impact – is the measurement of re-use (5). The argument for open practice is that it enables re-use, particularly unexpected re-use, of research outputs, thereby maximising the return on public investment. While there may be arguments over exactly what “impact” means or should mean, at its centre is the use of academic outputs for a diverse range of purposes that lie beyond pure research. Furthermore, I would argue that measurement of re-use is a credible alternative to current prestige-based metrics such as a journal’s impact factor. The popularity of the H-index (6) and the rapid growth of interest in article-level metrics (7) – both measures, albeit crude, of re-use – show that the idea of measuring influence, impact, and re-use is comprehensible to stakeholders.
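As an illustration of how crude such re-use measures can be in practice, the H-index (6) of a set of papers is simply the largest h such that h of the papers have at least h citations each. A minimal sketch (the example citation counts are hypothetical):

```python
def h_index(citations):
    """Return the largest h such that at least h papers
    have h or more citations each (Hirsch, 2005)."""
    h = 0
    # Rank papers from most to least cited; h grows while the
    # paper at rank i still has at least i citations.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Five papers cited 10, 8, 5, 2 and 1 times give an h-index of 3:
# three papers have at least 3 citations each, but not four with 4.
print(h_index([10, 8, 5, 2, 1]))
```

The index deliberately ignores everything about *how* a paper is cited, which is precisely the kind of coarseness a richer measure of re-use would need to move beyond.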
Re-assessing the idea of impact, and particularly its measurement, in terms of re-use has three significant advantages. First, it aligns the pragmatic interests of research funders much more closely with those of the advocates of open research practice, including patient advocates, researchers, and the wider research-interested community. While open research advocacy may be seen as a philosophical stance, for many advocates the aims are pragmatic: more rapid access to new treatments, effective environmental policy, and wider public engagement. By surfacing a strongly pragmatic rationale for open research practice, viewing impact as re-use provides a common vocabulary and set of aims that can connect open research advocacy with the realpolitik of research funders and their relationship with their ultimate funders.
Second, taking this pragmatic approach enables a rational tensioning of the philosophical basis of open research advocacy against the need to optimise outcomes in the real world. There are places and situations where “pure” open approaches are either counterproductive or impossible to apply. Applying a pragmatic and, in principle, measurable criterion for optimisation enables these edge cases to be identified, and again provides a shared vocabulary for discussing the optimum approaches. Finally, re-use and re-usability provide a measurement construct that enables a rational approach to the question of how to optimise research practice, given the impossibility of predicting the value of research outcomes in advance. Where research is speculative, the criterion of re-usability can still be applied: has the research been communicated and archived in such a way that its potential for re-use has been maximised? While the outputs of much research will see limited re-use, particularly in the short term, researchers can be judged on the extent to which they have maximised the ability of others to build upon or re-use the outputs of their research.
Reconfiguring our view of research impact around measuring re-use has many advantages but poses significant challenges for the development of viable metrics. The measurement of re-use is non-trivial even in the relatively well-described and well-captured world of academic peer-reviewed journals. However, as research communication on the web starts to take off, there is an opportunity to embed effective systems of measurement into the technical and cultural communication infrastructure. The expansion of usage measures by bibliographic tools, and of citation on social networking and broadcast services, provides a model of what can be done technically. The question is whether the community of researchers, funders, and the public has the will to build the cultural norms that make these measurement systems effective.
1. Science, Innovation and the Economy, speech by David Willetts, UK Minister of State for Universities and Science, Royal Institution, 9 July 2010, http://www.bis.gov.uk/news/speeches/david-willetts-science-innovation-and-the-economy
2. Jenny Fry, Suzanne Lockyer, Charles Oppenheim, John Houghton and Bruce Rasmussen (2009) Identifying benefits arising from the curation and open sharing of research data produced by UK Higher Education and research institutes http://ie-repository.jisc.ac.uk/279/1/JISC_data_sharing_finalreport.doc
3. Research Information Network (2010) Open Science Case Studies http://www.rin.ac.uk/our-work/data-management-and-curation/open-science-case-studies
4. Julia Lane (2010) Let’s make science metrics more scientific, Nature 464: 7288 http://www.nature.com/nature/journal/v464/n7288/full/464488a.html
5. Cameron Neylon (2010) Metrics of use: How to align researcher incentives with outcomes http://cameronneylon.net/blog/metrics-of-use-how-to-align-researcher-incentives-with-outcomes/
6. J E Hirsch (2005) An index to quantify an individual’s scientific research output, Proceedings of the National Academy of Sciences (USA), 102(46): 16569 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1283832/
7. C Neylon and S Wu (2009) Article-Level Metrics and the Evolution of Scientific Impact. PLoS Biol 7(11): e1000242. doi:10.1371/journal.pbio.1000242