UCount: a Community-Driven Approach for Measuring Scientific Reputation [v0]

This is version 0 of an abstract to be presented at altmetrics11.

Cristhian Parra (1)
Aliaksandr Birukou (1, 2, 3)
Fabio Casati (1)
Regis Saint-Paul (2, 3)
Joseph Rushton Wakeling (2, 3)
Imrich Chlamtac (2, 3)
1. Department of Information Engineering and Computer Science, University of Trento, Italy
2. CreateNet, Trento, Italy
3. European Alliance for Innovation, Belgium

Introduction

Assessment of research has proven to be a complex issue. Scientific excellence has different meanings across communities [Lamont2009], and it is not easy to determine which characteristics influence it most. The most common approach to simplifying the evaluation process is the use of bibliometric indicators such as the h-index [Hirsch2005]. Scientific impact, however, is a multi-dimensional construct that cannot be adequately measured by any single indicator [Bollen2009, Lehmann2006, Martin1996], and the advent of the web era brought a whole new set of challenges that reinforce this argument. While nothing is essentially wrong with using bibliometrics, we have come to understand that it is not enough to measure the full scope of scientific impact [Adler2009].

The social web opened new ways of disseminating scientific knowledge, suggesting that the social dimension can be an important component of scientific reputation. The altmetrics initiative is on the right track to lead this development by analyzing research impact in terms of alternative metrics based on web and social attention. One use of social metrics is as an extension of bibliometric impact measures (e.g., article download statistics, or the number of bookmarks on Connotea or Mendeley). However, social impact can involve many other factors, including participation in events or communities, or providing comments or reviews of others’ work.

Our intuition is that both the social and the bibliometric dimensions are important, and we propose to combine them for measuring impact. To determine which metrics matter for which research community, we propose to analyse the subjective opinions of researchers and then to combine social and traditional metrics so as to approximate this subjective information. We therefore propose UCount, a community-driven approach to the evaluation of researchers that gives the power of evaluation to the community and to researchers themselves. UCount will do this by enabling researchers to provide feedback on, keep track of, and affect their own community-driven reputation.

The UCount Approach

The UCount approach aims at providing a platform that facilitates community-based evaluation in different research areas and helps identify the set of reputation metrics that are perceived as important within a specific scientific community. In an initial setting, we provide two novel reputation metrics that will be computed by leveraging community opinions: i) the UCount Scientific Impact, and ii) the UCount Reviewer Score.

The UCount Scientific Impact will be computed by means of surveys in which members of a community judge the contribution of other community members to science. The collected community opinions will be analyzed for correlation with both bibliometric indicators and participative/representative measures obtained from the community, and the UCount Scientific Impact metric will be computed from this analysis. To obtain bibliometric indicators, we rely on Reseval [Imran2010], a tool developed within the LiquidPub project. Reseval supports the assessment of scientific research by serving as a gateway to popular scientific impact measurements, calculated by crawling different sources of metadata about scientific publications.
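To make the survey data concrete, the sketch below shows one plausible way of turning raw responses into a per-researcher community opinion score; the response format and the use of a plain mean are our own illustrative assumptions, not part of the UCount specification.

```python
# Sketch only: aggregate survey responses into a per-researcher
# "community opinion" score (assumed 1-5 rating scale, plain mean).
from collections import defaultdict
from statistics import mean

# Each response: (respondent_id, researcher_id, rating)
responses = [
    ("r1", "alice", 5), ("r2", "alice", 4),
    ("r1", "bob", 3), ("r3", "bob", 2),
]

ratings = defaultdict(list)
for _respondent, researcher, rating in responses:
    ratings[researcher].append(rating)

community_opinion = {who: mean(vals) for who, vals in ratings.items()}
print(community_opinion)  # {'alice': 4.5, 'bob': 2.5}
```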

For computing the UCount Reviewer Score, we will rely on information collected from the submission system of the ICST Transactions, a series of 26 scientific journals being launched by the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering (ICST) and the European Alliance for Innovation (EAI) in 2011-2012. Authors will be asked to evaluate reviewers using indicators such as the following (a sketch of how such indicators might be combined into a single score is given after the list):

  1. fairness, i.e., the authors’ perception that their work got the comments it deserved;
  2. helpfulness, i.e., the authors’ perception that the reviewers’ comments help to improve the work;
  3. politeness, i.e., the authors’ perception of the overall tone of the review.

In addition to that, we will analyse the data and apply techniques for review analysis [Casati2010, Ragone2011] to automatically discover:

  4. responsiveness, i.e., how quick the reviewer is in providing comments to the authors;
  5. bias, i.e., to what extent the reviewer’s marks are biased towards the affiliation, country, or gender of the authors;
  6. agreement, i.e., how often the reviewer’s marks differ from those of the other reviewers;
  7. prediction ability, i.e., how well the reviewer’s marks correlate with the later impact (e.g., citation count) of the paper.
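As a rough illustration, the sketch below folds per-paper author ratings on the first three indicators into a single reviewer score, starting from the self-assigned score used to bootstrap the process (described in the next paragraph); the equal weights and the simple blending rule are assumptions made for illustration, not part of the UCount design.

```python
# Sketch only: blend a reviewer's self-assigned starting score with author
# ratings on fairness, helpfulness and politeness (assumed 1-5 scale).
from statistics import mean

def updated_reviewer_score(self_assigned, author_ratings, weight_history=0.5):
    """author_ratings: list of dicts with 'fairness', 'helpfulness', 'politeness'."""
    if not author_ratings:
        return self_assigned
    per_review = [mean(r[k] for k in ("fairness", "helpfulness", "politeness"))
                  for r in author_ratings]
    observed = mean(per_review)
    # Convex combination of the bootstrap score and the observed ratings
    return weight_history * self_assigned + (1 - weight_history) * observed

print(updated_reviewer_score(4.0, [
    {"fairness": 5, "helpfulness": 4, "politeness": 5},
    {"fairness": 3, "helpfulness": 4, "politeness": 4},
]))  # ~4.08
```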

To bootstrap the process, each reviewer will start from an initial self-assigned score, which will then be updated based on the authors’ evaluations. A related example of peer-review evaluation is the Review Quality Instrument (RQI) [VanRooyen1999]. For computing the UCount Scientific Impact, we will rely on the surveys we run among the members of various scientific communities. Our approach for collecting reputation data consists of the following steps:

  1. Define scientific communities (we start with communities representing the members of the ICST Transactions and the scientists from the Parlsberg list who are closest to them).
  2. Create surveys for collecting opinions about a subset of the scientists in the defined communities.
  3. Distribute the surveys to the members of the communities.
  4. Collect and summarize the survey results (“Community Opinions”).
  5. Collect other bibliometric and social indicators about the members of the communities.
  6. Analyze the “Community Opinions” and indicators, looking for combined metrics that approximate the community opinions.

The metric that results from explaining the “community opinions” in terms of these other indicators is what we call the UCount Scientific Impact reputation metric.
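One simple way to realize step 6, sketched below, is an ordinary least-squares fit of the aggregated community opinion scores on the collected indicators; the specific indicator columns and the linear form are illustrative assumptions rather than a commitment of the UCount approach.

```python
# Sketch only: approximate community opinion scores as a linear
# combination of bibliometric/social indicators (toy values).
import numpy as np

# Rows: researchers; columns: h-index, citation count, event participations
indicators = np.array([
    [10, 200, 3],
    [25, 900, 1],
    [ 5,  50, 7],
    [15, 400, 4],
], dtype=float)
opinion = np.array([3.2, 4.1, 2.8, 3.6])  # aggregated survey scores

# Add an intercept column and solve the least-squares problem
X = np.hstack([indicators, np.ones((indicators.shape[0], 1))])
weights, *_ = np.linalg.lstsq(X, opinion, rcond=None)

# The fitted combination is one candidate for the UCount Scientific Impact metric
ucount_scientific_impact = X @ weights
print(weights)
print(ucount_scientific_impact)
```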

Preliminary Results

In our research aimed at understanding the meaning of scientific reputation and the factors that contribute to it, we used reputation surveys created for conferences such as BPM and ICWE, together with the analysis of publicly available evaluation committee results (such as contests for research positions). Using the community opinions obtained from the surveys, we performed statistical analysis and data mining aimed at understanding how these opinions correlate with bibliometric indicators. So far, we have found no significant correlation between reputation and these features, including the h-index, the g-index, and the number of citations [Parra2011] (we used Reseval, the Parlsberg list, and DBLP to calculate these metrics). Figure 1 shows the correlation results from this study for each metric analyzed. As can be seen, the Kendall tau correlation always falls within the non-significant range (-0.5; 0.5).

Figure 1. Correlation between traditional bibliometric indicators and reputation rankings
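For reference, the kind of check reported above can be reproduced with a rank correlation such as Kendall's tau; the sketch below uses toy values and scipy.stats.kendalltau, and is not the actual study code.

```python
# Sketch only: rank correlation between a survey-based reputation ranking
# and a bibliometric indicator such as the h-index (toy values).
from scipy.stats import kendalltau

reputation_rank = [1, 2, 3, 4, 5, 6]   # from community survey
h_index = [12, 30, 8, 15, 22, 5]       # e.g., from Reseval / DBLP

tau, p_value = kendalltau(reputation_rank, h_index)
print(f"Kendall tau = {tau:.2f}, p = {p_value:.3f}")
# A |tau| well below 0.5 would be consistent with the weak correlations reported above.
```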

Acknowledgements

Many good ideas in this paper were born in discussions with the members of the UCount council (Paolo Bellavista, Paul Groth, Peep Kungas, and Cameron Neylon), the LiquidPub project, ICST, and EAI. The LiquidPub project acknowledges the financial support of the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission, under FET-Open grant number 213360.

References

[Adler2009] R. Adler, J. Ewing, and P. Taylor. “Citation Statistics.” Statistical Science 24, no. 1 (February 2009): 1-14. doi:10.1214/09-STS285. http://projecteuclid.org/euclid.ss/1255009002.

[Bollen2009] J. Bollen, H. Van de Sompel, A. Hagberg, and R. Chute. A principal component analysis of 39 scientific impact measures. PLoS ONE, 4(6): e6022, 2009. doi:10.1371/journal.pone.0006022

[Casati2010] F. Casati, M. Marchese, K. Mirylenka, A. Ragone. Reviewing peer review: a quantitative analysis of peer review, 2010. http://eprints.biblio.unitn.it/archive/00001813/

[Imran2010] M. Imran, M. Marchese, A. Ragone, A. Birukou, F. Casati, and J. Laconich. ResEval: An Open and Resource-oriented Research Impact Evaluation tool, 2010. http://eprints.biblio.unitn.it/archive/00001817/.

[Lamont2009] M. Lamont. How professors think: Inside the curious world of academic judgment. Harvard University Press, 2009.

[Lehmann2006] S. Lehmann, A. D. Jackson, and B. E. Lautrup. “Measures for measures.” Nature 444, no. 7122 (December 2006): 1003-4. doi:10.1038/4441003a. http://www.ncbi.nlm.nih.gov/pubmed/17183295.

[Martin1996] B. R. Martin. “The use of multiple indicators in the assessment of basic research.” Scientometrics 36, no. 3 (July 1996): 343-362. doi:10.1007/BF02129599. http://www.springerlink.com/index/10.1007/BF02129599.

[Parra2011] Cristhian Parra, Fabio Casati, Florian Daniel, Maurizio Marchese, Luca Cernuzzi, Marlon Dumas, Peep Kungas, Luciano García-Bañuelos, Karina Kisselite. Investigating the nature of scientific reputation. In Proceedings of 13th International Society for Scientometrics and Informetrics Conference, Durban, South Africa. 4th – 8th July, 2011.

[Ragone2011] Azzurra Ragone, Katsiaryna Mirylenka, Fabio Casati, Maurizio Marchese. A Quantitative Analysis of Peer Review. In Proceedings of the 13th Conference of the International Society for Scientometrics and Informetrics (ISSI’2011), to appear.

[VanRooyen1999] S. van Rooyen, N. Black, and F. Godlee. “Development of the Review Quality Instrument (RQI) for assessing peer reviews of manuscripts.” Journal of Clinical Epidemiology 52, no. 7 (July 1999): 625-9. http://www.ncbi.nlm.nih.gov/pubmed/10391655.
