The need for more sophisticated altmetric indicators: A proposal for the categorization and development of aggregate indicators

The 2015 Altmetrics Workshop
Amsterdam, 9 October 2015

Fereshteh Didegah
Timothy D. Bowman
Kim Holmberg


Indicators are proxies for defining and measuring variables that cannot be measured directly. In bibliometric and scientometric studies, indicators are used to quantify the impact, quality, and significance of research and of research actors such as authors, journals, institutions, and countries. The first bibliometric and scientometric indicators emerged in the 1960s for research evaluation purposes, but the main reason for developing such indicators was the dramatic expansion of information and the need for control over it (Leydesdorff, 2005). Hinze and Glänzel (2013) categorized scientometric indicators into four groups: (1) productivity as measured by publication counts; (2) collaboration as measured by co-authorship; (3) impact as measured by citation counts; and (4) cognitive structures as measured through co-word or co-citation analyses. Many indicators have been introduced that combine and integrate raw publication and citation counts. Some of them, such as the journal impact factor and the author h-index, have gained wide popularity and recognition and are routinely used in research evaluation, even though both have shortcomings and have been criticized in scientometric research (Jacsó, 2012; Vanclay, 2009).
Altmetrics is a new generation of imetrics (Milojević & Leydesdorff, 2013) used for measuring the impact of online events and interactions. These “alternative” metrics have been used to create new indicators for research assessment. One purpose of using altmetrics is to measure different types of academic research impact, rather than focusing solely on scientific impact as measured by citations. Altmetric indicators are becoming more widely recognized and popular; however, they have been criticized for being vague with regard to their meanings and the levels of impact they capture. One reason for this criticism is that they differ in functionality, data source, users, usage purposes, and so on. A few classifications have grouped them by application or perception level (Junping & Houqiang, 2015; Colledge, 2014), but many studies simply report altmetric indicators without noting these differences.
Altmetric tools such as Altmetric.com or ImpactStory only retrieve raw counts of events (readership rates, tweets, blog posts, etc.) for an article. Recent research has shown that such raw counts should not be used as stand-alone measures for evaluation without acknowledging the differences in the types of impact they measure and their volatility over time and across subject domains.
Using an automated algorithm, Altmetric.com assigns a weighted score to indicators; the Altmetric score weights Facebook counts, Reddit events, and YouTube mentions less heavily than news stories and blog posts. These weightings may be a matter of debate, however, as altmetric indicators do not remain static over the course of time.
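The kind of weighted score described above can be sketched in a few lines. The weights below are illustrative assumptions only (the actual Altmetric.com weighting scheme is not reproduced here); the point is simply that news and blog events contribute more than Facebook, Reddit, or YouTube events.

```python
# Hypothetical source weights -- example values only, not the real
# Altmetric.com scheme. News and blogs are weighted more heavily than
# Facebook, Reddit, or YouTube, mirroring the ordering described above.
EXAMPLE_WEIGHTS = {
    "news": 8.0,
    "blog": 5.0,
    "twitter": 1.0,
    "facebook": 0.25,
    "reddit": 0.25,
    "youtube": 0.25,
}

def weighted_score(event_counts, weights=EXAMPLE_WEIGHTS):
    """Sum raw event counts multiplied by their source weight.

    Sources without a known weight are ignored rather than guessed.
    """
    return sum(count * weights[source]
               for source, count in event_counts.items()
               if source in weights)

# Usage: an article with 2 news stories, 1 blog post, and 40 tweets.
score = weighted_score({"news": 2, "blog": 1, "twitter": 40})
```

Note that because the weights are fixed constants, such a score inherits the very problem raised above: it cannot adapt as the meaning of the underlying indicators shifts over time.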
Moreover, altmetric indicators capture many types of events, and even more can be expected to emerge as the underlying tools and websites change. Given that research has been found to have several types of societal impact—scientific, cultural, economic, educational, and environmental—a larger set of indicators allows for a broader understanding of research impact, but it also causes confusion about which indicator(s) should be preferred for measuring which type of impact. Key questions include: (1) Can altmetric event counts represent some type of impact? (2) Can a single indicator represent many types of impact, or are aggregate indicators required? (3) Does the importance of indicators vary over time or across subject domains? To answer these questions, it is important to develop aggregate indicators that reduce the number of parameters needed to convey the precise state of an event.
Previous work has attempted to categorize altmetric indicators by similar functionality or similar data sources. For instance, the categorization presented by Snowball Metrics (Colledge, 2014) groups indicators into academic activities, academic commentaries, social media tools, and mass media, while Junping and Houqiang (2015) categorized indicators by application, social media use, and perception levels. The logic of such categorizations can be applied to developing combined altmetric indicators. Many studies are increasingly contributing towards clarifying different aspects of altmetrics. Yet, except for a recent unpublished work, no previous studies have discussed the need for combined and normalized altmetric indicators.
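The categorization logic described above can be sketched as a simple source-to-group aggregation. The mapping below is a hypothetical illustration loosely inspired by the Snowball Metrics grouping, not a definitive assignment; its purpose is to show how many raw per-source counts collapse into a handful of aggregate indicators.

```python
from collections import defaultdict

# Hypothetical mapping of altmetric sources to categories -- an
# illustrative assumption, not a definitive classification.
SOURCE_GROUPS = {
    "mendeley": "academic activity",
    "citeulike": "academic activity",
    "blog": "academic commentary",
    "twitter": "social media",
    "facebook": "social media",
    "news": "mass media",
}

def group_totals(event_counts, groups=SOURCE_GROUPS):
    """Collapse per-source raw counts into one total per category,
    reducing many parameters to a few aggregate indicators."""
    totals = defaultdict(int)
    for source, count in event_counts.items():
        if source in groups:
            totals[groups[source]] += count
    return dict(totals)

# Usage: one aggregate number per group instead of four raw counts.
totals = group_totals({"mendeley": 30, "twitter": 12,
                       "facebook": 3, "news": 1})
# totals == {"academic activity": 30, "social media": 15, "mass media": 1}
```

In practice each group total would still need weighting and normalization (e.g. by time window or subject domain) before it could serve as the kind of aggregate indicator proposed here.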
This work suggests an aggregate indicator for each group of so-called “alternative” metric indicators. The debates surrounding a proper classification of indicators and the assignment of weights to events will be the main focus of the workshop presentation and discussions, with the hope that this interaction will lead to future work and development in this area.

Supplementary materials