Uncertain research country rankings. Should we continue producing uncertain rankings?
Article Category: Research Papers
Published Online: 13 May 2025
Pages: 161 - 182
Received: 21 Mar 2025
Accepted: 24 Apr 2025
DOI: https://doi.org/10.2478/jdis-2025-0030
© 2025 Alonso Rodríguez-Navarro, published by Sciendo
This work is licensed under the Creative Commons Attribution 4.0 International License.
Purpose
Citation-based assessments of countries’ research capabilities often misrepresent their ability to achieve breakthrough advancements. These assessments commonly rank Japan at the level of a developing country, which contradicts its prominent scientific standing. The purpose of this study is to investigate the underlying causes of such inaccurate assessments and to propose methods for conducting more reliable evaluations.
Design/methodology/approach
The study evaluates the effectiveness of top-percentile citation metrics as indicators of breakthrough research. Using case studies of selected countries and research topics, the study examines how deviations from lognormal citation distributions impact the accuracy of these percentile indicators. A similar analysis is conducted using university data from the Leiden Ranking to investigate citation distribution deviations at the institutional level.
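For concreteness, the following Python sketch shows how a top-percentile indicator of this kind is typically computed: a unit's papers are counted against citation thresholds derived from the global distribution, here modelled as a lognormal. The distribution parameters and sample sizes are illustrative assumptions, not values taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "global" citation distribution; a lognormal is the usual
# reference model for a well-formed citation distribution.
global_cites = rng.lognormal(mean=1.5, sigma=1.2, size=200_000)

# Global thresholds that define the top 10% and top 1% most cited papers.
thr10 = np.percentile(global_cites, 90)
thr1 = np.percentile(global_cites, 99)

def top_percentile_ratios(citations, thr10=thr10, thr1=thr1):
    """Size-independent indicators: the share of a unit's papers that
    fall in the global top 10% and top 1% by citation count."""
    citations = np.asarray(citations)
    return {"Ptop10%/P": float(np.mean(citations >= thr10)),
            "Ptop1%/P": float(np.mean(citations >= thr1))}

# A country whose citations follow the global lognormal pattern recovers
# the nominal shares of roughly 0.10 and 0.01 (up to sampling noise).
country = rng.lognormal(mean=1.5, sigma=1.2, size=20_000)
print(top_percentile_ratios(country))
```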
Findings
The study finds that inflated lower tails in citation distributions cause some percentile indicators to undervalue the research capabilities of technologically advanced countries. Conversely, research-intensive universities show the opposite deviation: a reduced lower tail relative to the upper tail, which causes percentile indicators to overestimate their actual research capacity.
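A minimal simulation can make these two deviations concrete. In the hypothetical sketch below (all parameters are assumed for illustration), extra lowly cited papers are added to an otherwise global-pattern distribution, or part of the lower tail is removed; the top-10% ratio falls below or rises above the nominal 0.10 even though the upper tail, where the most influential papers sit, is identical in every case.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical global reference distribution and its top-10% threshold.
global_cites = rng.lognormal(mean=1.5, sigma=1.2, size=200_000)
thr10 = np.percentile(global_cites, 90)

# Country A: same upper tail as the global pattern, but an inflated
# lower tail (many extra lowly cited papers).
upper = rng.lognormal(mean=1.5, sigma=1.2, size=20_000)
extra_low = rng.lognormal(mean=0.2, sigma=0.8, size=10_000)
country_inflated = np.concatenate([upper, extra_low])

# University B: same upper tail, but a trimmed lower tail (few lowly
# cited papers), as described for research-intensive universities.
base = rng.lognormal(mean=1.5, sigma=1.2, size=20_000)
university_trimmed = base[base > np.percentile(base, 30)]

for name, cites in [("global pattern", upper),
                    ("inflated lower tail", country_inflated),
                    ("trimmed lower tail", university_trimmed)]:
    print(f"{name:22s} Ptop10%/P = {np.mean(cites >= thr10):.3f}")

# The inflated lower tail drags the ratio below 0.10 (undervaluation),
# while the trimmed lower tail pushes it above 0.10 (overvaluation),
# even though the upper tail is the same in all three cases.
```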
Research limitations
The findings rest on mathematical properties of citation distributions that are self-evident.
Practical implications
The ratios of the number of papers in the global top 10% and top 1% by citation count to the total number of papers are commonly used to describe research performance. However, because citation patterns in individual countries and institutions deviate from the global pattern, these ratios can be misleading and lose their value as research indicators.
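As a toy numerical example of how the ratio can mislead (the figures are invented for illustration and do not come from the study), consider two units with the same number of papers in the global top 10% but different total outputs:

```python
# Hypothetical example: two units with the same number of papers in the
# global top 10% (same volume of highly cited work), but different
# amounts of lowly cited output in the denominator.
units = {
    "Country X": {"top10_papers": 3_000, "total_papers": 60_000},
    "Country Y": {"top10_papers": 3_000, "total_papers": 30_000},
}

for name, u in units.items():
    ratio = u["top10_papers"] / u["total_papers"]
    print(f"{name}: Ptop10% = {u['top10_papers']}, Ptop10%/P = {ratio:.3f}")

# Both units contribute the same number of top-10% papers, yet the
# size-independent ratio ranks Country Y twice as high as Country X,
# purely because of the size of the lower tail of its distribution.
```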
Originality/value
Size-independent research performance indicators, obtained as the ratios of paper counts in top percentiles to total publication counts, are widely used by public and private institutions. This study demonstrates that using these ratios for research evaluations and country rankings can be highly misleading.