Citation distributions and research evaluations: The impossibility of formulating a universal indicator
Article category: Research Papers
Published online: 19 Nov 2024
Pages: 24 - 48
Received: 16 Jul 2024
Accepted: 21 Oct 2024
DOI: https://doi.org/10.2478/jdis-2024-0032
© 2024 Alonso Rodríguez-Navarro, published by Sciendo
This work is licensed under the Creative Commons Attribution 4.0 International License.
Purpose
To analyze the diversity of citation distributions of publications across research topics and thereby assess the accuracy of size-independent, rank-based indicators. Top percentile-based indicators are the most common indicators of this type, and the evaluations of Japan are the clearest examples of the misjudgments they produce.
Design/methodology/approach
The distributions of citations to publications from countries and journals in several research topics were analyzed, along with those of the corresponding global publications, using histograms with logarithmic binning, double rank plots, and normal probability plots of log-transformed citation counts.
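The three graphical diagnostics named above can be reproduced with standard scientific-Python tools. The sketch below is illustrative only and is not the author's code; the arrays `local` and `global_cites` are hypothetical placeholders standing in for the citation counts of a country's (or journal's) publications and of the corresponding global publication set.

```python
# Illustrative sketch of the three diagnostics: histogram with logarithmic
# binning, double rank plot, and normal probability plot of log-transformed
# citation counts. `local` and `global_cites` are placeholder data, not the
# datasets analyzed in the paper.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
local = rng.lognormal(mean=1.0, sigma=1.2, size=2_000).astype(int)          # hypothetical country/journal set
global_cites = rng.lognormal(mean=1.2, sigma=1.2, size=50_000).astype(int)  # hypothetical global set

fig, axes = plt.subplots(1, 3, figsize=(12, 3.5))

# (1) Histograms with logarithmic binning (uncited papers excluded from the log axis).
bins = np.logspace(0, np.log10(global_cites.max() + 1), 30)
axes[0].hist(global_cites[global_cites > 0], bins=bins, alpha=0.5, label="global")
axes[0].hist(local[local > 0], bins=bins, alpha=0.5, label="local")
axes[0].set(xscale="log", yscale="log", xlabel="citations", ylabel="number of papers")
axes[0].legend()

# (2) Double rank plot: global rank versus local rank of each local paper,
#     both on logarithmic axes; an approximately straight line indicates that
#     the global ranks of the local publications follow a power law.
global_asc = np.sort(global_cites)
local_desc = np.sort(local)[::-1]
local_rank = np.arange(1, local_desc.size + 1)
# Global rank = 1 + number of global papers cited more often than the local paper.
global_rank = global_cites.size - np.searchsorted(global_asc, local_desc, side="right") + 1
axes[1].loglog(local_rank, global_rank, ".")
axes[1].set(xlabel="local rank", ylabel="global rank")

# (3) Normal probability plot of log-transformed citations (cited papers only):
#     a straight line suggests the cited papers follow a lognormal distribution.
stats.probplot(np.log10(local[local > 0]), dist="norm", plot=axes[2])
axes[2].set_title("normal probability plot, log citations (local)")

plt.tight_layout()
plt.show()
```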
Findings
Size-independent, top percentile-based indicators are accurate when the global ranks of local publications fit a power law, but deviations in the least cited papers are frequent for countries and occur in all journals with high impact factors. In such cases, a single indicator is misleading. Comparing the proportions of uncited papers is the best way to predict these deviations.
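As a concrete companion to these findings, the sketch below (my own illustration, not taken from the paper) computes a size-independent top 10% indicator against a global reference set and compares the proportions of uncited papers in the local and global sets; the data are hypothetical placeholders.

```python
# Hedged illustration of a size-independent top-percentile indicator and of
# the comparison of uncited-paper proportions discussed above. The data are
# hypothetical placeholders, not results from the paper.
import numpy as np

rng = np.random.default_rng(0)
local = rng.lognormal(mean=1.0, sigma=1.2, size=2_000).astype(int)          # hypothetical local set
global_cites = rng.lognormal(mean=1.2, sigma=1.2, size=50_000).astype(int)  # hypothetical global set

def top_percentile_share(local_counts, global_counts, p=10):
    """Fraction of local papers whose citation counts reach the global top p%."""
    threshold = np.percentile(global_counts, 100 - p)
    return float(np.mean(local_counts >= threshold))

def uncited_share(counts):
    """Proportion of papers with zero citations."""
    return float(np.mean(counts == 0))

print(f"P_top10% (local vs. global): {top_percentile_share(local, global_cites):.3f}")
print(f"uncited share, local:        {uncited_share(local):.3f}")
print(f"uncited share, global:       {uncited_share(global_cites):.3f}")
# A local uncited share that differs markedly from the global one signals a
# deviation in the lower tail, in which case a single percentile indicator
# can be misleading.
```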
Research limitations
This study is fundamentally analytical, and its results describe mathematical facts that are self-evident.
Practical implications
Respected institutions, such as the OECD, the European Commission, and the U.S. National Science Board, produce country rankings of research and individual evaluations using size-independent percentile indicators that are misleading for many countries. These evaluations should be discontinued because they can confuse research policymakers and lead to incorrect research policies.
Originality/value
Studies linking the lower tail of the citation distribution, including uncited papers, to percentile research indicators have not been performed previously. The present results demonstrate that studies of this type are necessary to find reliable procedures for research assessment.