Altmetrics: Peer Evaluation, a case study [v0]

This is version 0 of an abstract to be presented at altmetrics11.

Aalam Wassef

Peer Evaluation is an independent Open Access and Open Scholarship online initiative. It lets peers share their primary data, articles and scholarly projects in any shape and form. These works are then openly reviewed, disseminated and discussed by the community. All social interactions and evaluations are aggregated and presented as datasets of qualitative indicators of authority, impact and reputation. Peer Evaluation is also keen on diversifying and promoting social processes of dissemination.
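
As a rough illustration of the kind of aggregation described above, the sketch below groups raw interaction records into a per-author indicator dataset. The interaction types, field names and grouping are assumptions made for the sake of the example, not Peer Evaluation's actual data model.

```python
from collections import Counter

# Hypothetical interaction records: (author_id, interaction_type).
# These types and identifiers are illustrative only.
interactions = [
    ("author-42", "review"),
    ("author-42", "download"),
    ("author-42", "endorsement"),
    ("author-17", "comment"),
    ("author-17", "review"),
]

def aggregate_indicators(records):
    """Group raw social interactions into a per-author indicator dataset."""
    indicators = {}
    for author, kind in records:
        indicators.setdefault(author, Counter())[kind] += 1
    # Present each author's indicators as a plain dict, ready to be
    # exported or combined with other qualitative signals.
    return {author: dict(counts) for author, counts in indicators.items()}

print(aggregate_indicators(interactions))
# {'author-42': {'review': 1, 'download': 1, 'endorsement': 1},
#  'author-17': {'comment': 1, 'review': 1}}
```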

In the proposed presentation I will attempt to recount the preconditions that needed to be fulfilled so that Peer Evaluation’s community might be encouraged to produce open reviews and qualitative indicators of reputation, authority and trust, to rely on them, and to believe in their benefits.

The preconditions that needed to be fulfilled fall into three categories: Opportunities, Endeavours and Imperatives.

Opportunities, as in how the Social Web has put efficient dissemination tools within everyone’s reach, so that institutional or self-produced quality content may be easily disseminated and exposed to the general public and to qualified peers.

Endeavours, as in what we wish to achieve and challenge in terms of openness, dataset standardization, transparency, accessibility, open scholarship and cultural diversity.

Finally, Imperatives, as in what needs to be guaranteed to scholars and professionals during this “transitional period” in which we witness the coexistence of, on the one hand, traditional peer reviewing, publishing, restricted access, copyright restrictions and impact factor(s) and, on the other hand, alternative practices exploring open and collective peer reviewing, qualitative indicators of scientific authority, simultaneous formal and informal dissemination of scholarly works, multi-format publications and social indicators of impact and trust.

Opportunities

The concept of virality was well established before Facebook standardized sharing, which instantaneously exposes a digital object to your friends and friends of friends. Before 2007, one would rely on cherry-picked and timely tags, tagging platforms (del.icio.us, Digg, etc.) and simultaneous postings in targeted blogospheres and relevant online communities. If the content was good, it would not take long for its authors to witness a snowball effect and, if the content kept coming while remaining of high quality, the authors could expect high rankings in search engines and large numbers of subscribers, comments, views and downloads, most of which are quantitative measures of popularity. All of this remains true, except that the opportunities have only become greater and require less effort. Although the dissemination game may seem obvious to some, not everyone enjoys the benefits of such affordable and efficient processes, processes that could tremendously serve scholarly endeavours. In that respect Peer Evaluation is something of a dissemination station, a budding concept that will benefit from reciprocal integration with other Open Access platforms, projects and repositories.

Endeavours

Open Access, free or as free as possible, fair, transparent, Open Scholarship, multilingual, exportable, standardized datasets… Regarding all of the above, I would like to highlight two main challenges we face with Open Access and Open Scholarship.

Open Access

We envision Open Access and platforms such as Peer Evaluation, and hopefully many others, as the future of publishing institutions and private publishing companies. It would be hard to understand why the publishing industry would frown upon communities that help with the collective production, reviewing and selection of works that deserve a publisher’s attention. These works can then be expanded, shortened or edited to better fit journals, reviews, interactive media…

Open Scholarship

Open Scholarship is a challenge on many levels, but it was that very challenge that led us to establish the beginnings of a solution to a crucial issue with many implications: reliable mechanisms of trust in open social networks. It would have been easy to accept only academics with academic email addresses, and thereby be able to verify their affiliation, academic status, alleged authority and so on. We would then have replicated established hierarchies, reducing the chances of promoting researchers, research centers and individuals who suffer from those very hierarchies. Instead, Peer Evaluation established a self-protective and egalitarian mechanism of trust and endorsement which, as the community’s interactions multiply, is meant to evolve and be perfected with nuanced parameters.
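
A minimal sketch of how such an endorsement-based mechanism could work in principle is shown below. The endorsement graph, threshold and function names are assumptions for illustration only, not the mechanism Peer Evaluation actually implements.

```python
# Hypothetical endorsement graph: endorsements[m] is the set of peers who
# have endorsed member m. Trust here is purely relational - no academic
# email or institutional affiliation is checked - so newcomers can earn
# standing from peer endorsements alone.
endorsements = {
    "new_researcher": {"established_peer", "independent_scholar"},
    "established_peer": {"independent_scholar"},
    "independent_scholar": set(),
}

ENDORSEMENT_THRESHOLD = 2  # assumed value, purely illustrative

def is_trusted(member, graph, threshold=ENDORSEMENT_THRESHOLD):
    """A member is trusted once enough distinct peers have endorsed them."""
    return len(graph.get(member, set())) >= threshold

for member in endorsements:
    print(member, "trusted:", is_trusted(member, endorsements))
```

In a real system the threshold would be only one of many nuanced parameters that evolve as the community's interactions multiply.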

Imperatives

As Cameron Neylon put it, it seems imperative to go beyond the impact factor. Many agree that citation counts, the h-index and other indexes are not reliable or meaningful enough to determine, to the extent that they currently do, a scholar’s fate. Stevan Harnad, Peter Suber and organizations such as SPARC also stress the urgency of opening access to research.

Beyond the imperatives mentioned above, I would like to briefly conclude these pages by highlighting the imperative of DOI democratization. In just a decade, DOIs have become one of the cornerstones of scholarly publishing, increasing the visibility, searchability, citations and impact factors of certain works, authors, journals and institutions, while further excluding those who have not been assigned, or cannot be assigned, DOIs for their valuable contributions to human development.

One of Peer Evaluation’s most essential preconditions was to establish itself as a publisher and a member of Crossref, so that quality contributions that have been peer reviewed and accepted by its authoritative members may be assigned a DOI and then be published in Peer Evaluation’s Open Review, an open access journal whose publications are produced, reviewed and accepted by a community of qualified peers.
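
As an illustration only, the sketch below shows the kind of minimal metadata a publisher gathers when registering a DOI for an article. The field names, placeholder prefix and URL are hypothetical; actual Crossref registration uses a more detailed XML deposit schema, which this simplified dictionary does not reproduce.

```python
import json

# Simplified, hypothetical metadata for a DOI registration. Real Crossref
# deposits require an XML document with many more elements; this only
# illustrates the kind of information involved.
doi_record = {
    "doi": "10.XXXX/pe.or.2011.001",          # placeholder prefix and suffix
    "title": "An openly reviewed article",
    "contributors": ["First Author", "Second Author"],
    "journal": "Peer Evaluation Open Review",
    "publication_date": "2011-05-28",
    "resource_url": "https://example.org/articles/001",  # landing page
}

print(json.dumps(doi_record, indent=2))
```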

Finally, Peer Evaluation is rich in a wide variety of qualitative and social indicators of impact, authority and trust that are comprehensive and that promote reviewer accountability. These datasets await our collective work to be experimented with, perfected, shared and exported, so that one day they may become as relied upon as the quantitative metrics they are meant to complement.

One Comment

  1. birukou
    Posted May 28, 2011 at 10:50 am

    Hello Aalam, this is a nice summary of what is required for building an open evaluation community. I like that you let anyone (and not only professional scientists) join and comment – it would be great to see the general public more involved in science.

    Regarding DOIs – do you envision a DOI assigned to the “accepted” preliminary version of an article, or only to the “final” version (is there such a thing? :)).

    Finally – what about incentives? Why would one post work on Peer Evaluation, and why would one go there to see and review what others have posted?
