ACM Web Science Conference 2012 Workshop
Evanston, IL, 21 June 2012
Public Library of Science, USA
Although still emergent, the altmetrics movement is quickly advancing in establishing a new paradigm for measuring scholarly research impact beyond the prevailing journal-level metric. At this early stage of the movement, altmetrics must instill confidence and build credibility by providing irreproachable proof points. PLoS is a part of these efforts and continues to develop its Article-Level Metrics (ALM) program, started in 2009. Beyond PLoS's commitment to delivering the highest quality, we consider data reliability and validity imperative in our ALM implementation. In the brief abstract that follows, we describe our overall high-level vision of how data integrity can be achieved, which we will support with concrete examples at the workshop.
Although still emergent, the altmetrics movement is quickly advancing in establishing a new paradigm for measuring scholarly research impact beyond the prevailing journal-level metric. PLoS is a part of these efforts and continues to develop its Article-Level Metrics (ALM) program, started in 2009. We continue to expand the set of ALMs offered to cover more channels of dissemination as proxies of impact, and to build tools that make the data more useful to the research community. We also continue to tackle the issue of data integrity as an integral part of building a strong foundation for our ALM system and, more broadly, for altmetrics at large.
While altmetrics is still gaining credibility among the broader research community of researchers, institutional decision-makers, funders, and publishers, data integrity is of utmost importance. At this early stage of the movement, altmetrics must instill confidence and build credibility by providing irreproachable proof points. Reliable and valid data are paramount for establishing trust, even as the methods needed to ensure them continue to evolve and be refined. Much is at stake in research evaluation, not only for the individual researcher but for departments and institutions as well, through performance assessment exercises. Altmetrics will increasingly inform funding, hiring, tenure, and promotion decisions. Beyond PLoS's commitment to delivering the highest quality, we consider data reliability and validity imperative in our ALM implementation. In the brief abstract that follows, we describe our overall high-level vision of how data integrity can be achieved, which we will support with concrete examples at the workshop.
In our ALM advocacy efforts, we have learned that gaming is a widespread concern of researchers, institutional decision-makers, publishers, and funders. Indeed, one of the hallmark features of altmetrics is the difficulty of gaming a system composed of a multi-dimensional suite of metrics, setting it apart from the impact factor's vulnerabilities. That said, we strive to construct a holistic system – policies and processes as well as the enabling technologies – aimed at offering the highest quality of valid and reliable data possible. Our strategy is briefly described as follows:
1) Policies and processes
Data irregularity is a wide-ranging category encompassing multiple causes, effects, and resolution pathways; not all irregularities warrant concern or change. We have conceptualized four separate groupings:
- regular behavior flagged as inconsistent with previous data. Data generation and aggregation are operating fine, but usage activity in a data channel has shifted, and our expectations of “normal behavior” (e.g., search parameters) require tweaking.
- irregular behavior that does not reflect intentional misuse of ALMs by a person or machine (e.g., an article whose visibility was elevated by a third-party source). It is a one-off instance that may have continued spillover effects but does not reflect an overall change in the use of the data channel.
- intentional but not targeted attempts to increase ALMs (e.g., a bot trolling randomly targeted web pages). We comply with COUNTER 3's requirement to exclude its defined list of robots from usage statistics. We have also added robots that we observed in our log files but that were not part of the COUNTER 3 list; as new robots appear in the log files, we add them to the exclusion list.
- intentional and willful attempts by an individual or group to manipulate results (e.g., a professor asking students to visit an article multiple times, or involuntary article hits via automatic website redirects). This category is more commonly referred to as “gaming.”
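The robot exclusion described above can be sketched as a simple filter over usage events. The user-agent patterns, log-record fields, and function names below are illustrative assumptions, not the actual COUNTER 3 exclusion list or PLoS's implementation:

```python
import re

# Illustrative subset of robot user-agent patterns; the real COUNTER 3
# exclusion list is much longer, and PLoS extends it with robots
# observed in its own log files.
ROBOT_PATTERNS = [
    r"googlebot", r"slurp", r"msnbot", r"crawler", r"spider", r"bot/",
]
ROBOT_RE = re.compile("|".join(ROBOT_PATTERNS), re.IGNORECASE)

def is_robot(user_agent: str) -> bool:
    """Return True if the user agent matches a known robot pattern."""
    return bool(ROBOT_RE.search(user_agent))

def filter_usage_events(events):
    """Drop usage events generated by known robots before counting."""
    return [e for e in events if not is_robot(e["user_agent"])]

# Hypothetical usage events for one article.
events = [
    {"doi": "10.1371/journal.pone.0000001", "user_agent": "Mozilla/5.0"},
    {"doi": "10.1371/journal.pone.0000001", "user_agent": "Googlebot/2.1"},
]
clean_events = filter_usage_events(events)
```

Maintaining the exclusion list as data rather than code makes it straightforward to append newly observed robots without changing the filtering logic.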
From these four categories, we have instituted a process for investigating and resolving identified data irregularities, whether that means refining our understanding of the expected (“normal”) behavior of a data source or, in most cases, unearthing the specific technical issue affecting articles that have come to our attention and responding accordingly.
The response we take follows from our overall approach to data integrity governance, which distinguishes between our two high-level groups of data sources: internal and external. With internal PLoS usage data (page views, downloads, comments and ratings, etc.), we have full control of and responsibility for data generation, collection, and integration into the ALM application. For the external channels (i.e., all other data sources), we work to ensure that the data generated outside of PLoS is fully and accurately integrated into our system. These two sets of conditions result in separate levels of oversight and divergent resolution pathways.
2) Supporting technology: DataTrust
In support of the data integrity strategy, we are developing DataTrust, an audit and notification system within our ALM application that keeps watch over metrics activity and alerts us to unexpected changes in the data (sudden drops or spikes). A nightly task flags articles whose activity falls outside a set of parameters that we define as incongruous behavior and reports them to us by email. The application has an open framework, which allows us to modify audit parameters for existing data channels as normal activity levels change and to establish new parameters for future ALMs. It checks the number of page views and PDF downloads against an article's publication timeframe. It flags articles viewed in high numbers from a single IP address, or with the same referrer or user agent. It also flags papers that meet conditions spanning multiple data sources (e.g., any paper receiving more than 300 HTML page views for every 1 PDF download in a given day). As with the ALM application, we will freely release DataTrust once it is launched at PLoS.
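Two of the checks described above can be sketched as follows. Only the 300:1 HTML-to-PDF ratio is stated in the text; the single-IP threshold, field names, and function signature are hypothetical, not DataTrust's actual schema:

```python
from collections import Counter

HTML_TO_PDF_RATIO = 300   # stated in the text: > 300 HTML views per PDF download
SINGLE_IP_THRESHOLD = 50  # hypothetical threshold for views from one address

def audit_article(doi, events):
    """Return audit flags for one article's daily usage events.

    Each event is a dict like {"type": "html" | "pdf", "ip": "..."};
    this record shape is an illustrative assumption.
    """
    flags = []
    html = sum(1 for e in events if e["type"] == "html")
    pdf = sum(1 for e in events if e["type"] == "pdf")
    # Cross-source condition: HTML-to-PDF ratio beyond 300:1.
    if html > HTML_TO_PDF_RATIO * max(pdf, 1):
        flags.append(f"{doi}: HTML/PDF ratio {html}:{pdf} exceeds "
                     f"{HTML_TO_PDF_RATIO}:1")
    # Single-source condition: high view counts from one IP address.
    for ip, n in Counter(e["ip"] for e in events).items():
        if n > SINGLE_IP_THRESHOLD:
            flags.append(f"{doi}: {n} views from single IP {ip}")
    return flags
```

A nightly job would run such checks over the previous day's events per article and email any non-empty flag lists, with the thresholds kept configurable so they can be retuned as normal activity levels shift.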
The machine review by DataTrust is combined with human oversight, ensuring that the audit covers the entire corpus while unique irregularities are not overlooked by the machine. Cross-validation of all ALMs against data in non-PLoS applications (e.g., Total Impact, PLoS Explorer, Science Card) requires the manual work of going to the original sources to verify that the API integrations continue to function.
But as a completely open and transparent set of metrics, ALMs are also monitored by the research community at large. Academia currently operates well with its existing ethos of “policing by the community.” For example, plagiarism and data manipulation are forms of abuse for which academics are investigated and, once found guilty, ostracized from their community. We expect similar community norms to evolve with respect to gaming ALMs, where willful manipulation of altmetrics comes to be treated as a professional offense and reported to the respective institutions for further action. In addition to transparency, accountability is a key ingredient of trust. And since the chances of discovering gaming are higher, the risk of attempting it is magnified, thereby lowering the incentive to game.
PLoS continues to build on its vision of implementing a system that ensures data integrity. That said, the policies, processes, and supporting technologies described here are initial efforts in service of a larger, holistic system encompassing the entire research community, and we consider them preludes to the collective effort ahead of us. It is our hope that best practices will arise from a broad base of community members through formal discussions such as those at the altmetrics12 workshop.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.