ACM Web Science Conference 2012 Workshop
Evanston, IL, 21 June 2012
University of North Texas, USA
The altmetrics community, according to its manifesto, has grown around the assumption that the use of peer review as a filtering mechanism for quality scholarship has outlived its usefulness in the changing landscape of scholarly communication (Priem et al. 2010). It is certainly uncontroversial that the introduction of more web-based platforms for the dissemination and discussion of research has changed, and continues to change, the culture of scholarly communication. Journal articles, however, remain the dominant means of disseminating original research in many academic fields, and researchers continue to rely on peer review as a quality screening measure at multiple stages of both the research process and personal career advancement.
Peer review, of course, is not without faults (see especially Cole, Cole, and Simon 1981; Peters and Ceci 1982). Priem and Hemminger (2012) argue that pre-publication journal peer review has not undergone a major transformation since its inception in the seventeenth century with the Philosophical Transactions, concluding that the map of peer review no longer corresponds to the shifting territory of academic research. Evidence detailing the shortcomings of peer review is plentiful, but of particular importance is that the sheer glut of information facing modern scholars further strains peer review's capacity to keep pace with recent innovations in scholarly communication. Priem et al. (2010) respond by proposing that better filters are sorely needed; altmetrics are an attempt to provide just that. Altmetricians propose that the simple answer to the shortcomings of current impact measures, including peer review, is more: more, and more diverse, metrics to capture the significance of a piece of academic work (Neylon and Wu 2009), on the basis of which its importance becomes clear. I argue, however, that the altmetrics community should resist the temptation to supplant peer review with a host of altmetrics, no matter how diverse.
Any evaluation scheme is simultaneously a system of incentives, and so assessing the impact of research according to a suite of altmetrics will inevitably steer research in particular directions, as peer review has done. Implicit in the manifesto is an assertion of the value of democratizing venues for academic communication, and therefore of the importance of a diversity of measures to provide as complete a picture as possible of the scope of research impact. Thus, one of the goals of altmetrics is to promote greater physical access to academic research for non-academics, thereby making research more accountable to its public benefactors (Priem et al. 2010). For example, because altmetricians value the democratization of research assessment, altmetrics challenge traditional notions of who counts as a peer, a valuable check on the potential for peer review practices to become old boys' networks. But the desire to democratize impact implicitly expressed by the authors seems to be at odds with their explicit claim that better filters are needed to limit the volume of information to which academics are subjected. Neylon and Wu (2009), by contrast, while embracing the goal of democratization, treat filters as a form of prioritization: they are adamantly opposed to 'any response that favours publishing less,' which they call nonsensical 'either logistically, financially, or ethically.'
This reflects an important contradiction inherent in altmetrics themselves. Any measure of impact for published academic work already incorporates prior judgments from various peer review processes (Frodeman, Holbrook, and Barr 2012). Journal articles are subject to pre-publication peer review; citations, web mentions, or any example of an article's re-use – to use Neylon's (2011) terminology – accrue only to articles that have been vetted by peer review judgments. But even before pre-publication peer review, the research that actually makes it to publication was already subject to grant peer review to determine whether it was worth funding, and the researchers themselves were (and continually are) subject to peer review evaluations of their academic portfolios for the purposes of departmental evaluations, promotions, hiring, and tenure.
Article impact measures, then, do not escape the shortcomings of the peer review judgments upon which they are based and which they reflect, regardless of what kind of review process takes place, be it traditional expert panels, web-based crowdsourcing, or third-party external certification (such as that proposed by Priem and Hemminger). Peer judgments rendered via web-based crowdsourcing would certainly speed up the review process, and would make reviews visible and accountable, but the judgments rendered would not be immune to the promotion of conventionality (i.e. groupthink, or cultural exclusivity on a larger scale), nor would such a process limit the volume of research published. While using altmetrics to filter for impactful or significant research would somewhat democratize the evaluation of published research, it would also centralize and concentrate decisions regarding the direction of research trajectories in the hands of those who design and administer the metrics – only a subsection of the academic community.
Traditional peer review is a significant check on this concentration of power because it is the principal means through which academics assert their autonomy, including the right to determine the direction of research as a community. As Chubin and Hackett (1990) argue, peer review serves a multiplicity of functions – epistemic, sociological, political, and economic – within academic communities; that is, peer review itself has a more diverse meaning and functionality than merely winnowing excellent scholarship from the rest. The 'added value' of peer review for scholarly communication, the establishment of research trajectories, and the evaluation of academic work is that it is conscious of its own activity of judgment-making. Peer review allows for the exercise of autonomy within a framework that has the capacity to be self-critical. Thus, as the value of democratizing knowledge gains momentum in the academic community, altmetrics and peer review can balance one another: the former questions the latter on who counts as a peer, and the latter challenges the former on whose judgment counts in evaluating quality, impactful scholarship.
Chubin, D.E. and E.J. Hackett (1990) Peerless Science: Peer Review and U.S. Science Policy. Albany, NY: State University of New York Press.
Cole, S., Cole, J.R., and G.A. Simon (1981) Chance and Consensus in Peer Review. Science, 214(4523): 881-886.
Frodeman, R., Holbrook, J.B., and K. Barr (2012) The University, Metrics, and the Good Life, in P. Brey, A. Briggle, and E. Spence (eds.) The Good Life in a Technological Age. Routledge Studies in Science, Technology, and Society. New York, NY: Routledge.
Neylon, C. and S. Wu (2009) Article-Level Metrics and the Evolution of Scientific Impact. PLoS Biology, 7(11): e1000242. doi:10.1371/journal.pbio.1000242.
Neylon, C. (2011) Re-use as Impact: How Re-assessing What we Mean by “Impact” Can Support Improving the Return on Public Investment, Develop Open Research Practice, and Widen Engagement. Available at: http://altmetrics.org/workshop2011/neylon-v0/.
Peters, D.P. and S.J. Ceci (1982) Peer-review Practices of Psychological Journals: The Fate of Published Articles, Submitted Again. Behavioral and Brain Sciences, 5(2): 187-195.
Priem J. and B.M. Hemminger (2012) Decoupling the Scholarly Journal. Frontiers in Computational Neuroscience, 6(19). doi: 10.3389/fncom.2012.00019.
Priem, J., Taraborelli, D., Groth, P., and C. Neylon (2010) altmetrics: A Manifesto. Available at: http://altmetrics.org/manifesto/.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.