Journal Quality Factors from ChatGPT: More meaningful than Impact Factors?
Article Category: Research Papers
Published Online: May 07, 2025
Page range: 106 - 123
Received: Oct 12, 2024
Accepted: Dec 10, 2024
DOI: https://doi.org/10.2478/jdis-2025-0016
© 2025 Mike Thelwall et al., published by Sciendo
This work is licensed under the Creative Commons Attribution 4.0 International License.
Purpose
Journal Impact Factors and other citation-based indicators are widely used and abused to help select journals to publish in or to estimate the value of a published article. Nevertheless, citation rates primarily reflect scholarly impact rather than other quality dimensions, including societal impact, originality, and rigour. In response to this deficit, Journal Quality Factors (JQFs) are defined and evaluated. These are average quality score estimates given to a journal’s articles by ChatGPT.
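As a minimal formalisation of this definition (the symbols $q_{ij}$ and $n_j$ are introduced here for illustration and are not the authors' notation), the JQF of journal $j$ is the mean ChatGPT quality score over its $n_j$ scored articles:

$$\mathrm{JQF}_j = \frac{1}{n_j}\sum_{i=1}^{n_j} q_{ij},$$

where $q_{ij}$ is the quality score that ChatGPT assigns to article $i$ of journal $j$.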
Design/methodology/approach
JQFs were compared with Polish, Norwegian and Finnish journal ranks and with journal citation rates for 1,300 large monodisciplinary journals (130,000 articles from 2021) in the 25 of the 27 Scopus broad fields of research for which this was possible. Outliers were also examined. A sketch of how such a comparison could be run is given after this paragraph.
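The sketch below is illustrative only: it assumes rank correlations (Spearman, since national journal ranks are ordinal) and hypothetical file and column names, and it is not the authors' code.

```python
# Illustrative sketch of the comparison design described above.
# Assumptions (not from the paper): Spearman correlation, the input file
# "article_scores.csv", and its column names.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical input: one row per article with its ChatGPT quality score,
# its journal's national rank, and its journal's citation rate.
articles = pd.read_csv("article_scores.csv")  # assumed columns: field, journal, chatgpt_score, journal_rank, citation_rate

# JQF per journal: the mean ChatGPT quality score over the journal's articles.
per_journal = (
    articles.groupby(["field", "journal"])
    .agg(jqf=("chatgpt_score", "mean"),
         rank=("journal_rank", "first"),
         citations=("citation_rate", "first"))
    .reset_index()
)

# One correlation per Scopus broad field: JQF vs. national journal rank,
# with citation rate vs. rank computed alongside for comparison.
for field, grp in per_journal.groupby("field"):
    rho_jqf, _ = spearmanr(grp["jqf"], grp["rank"])
    rho_cit, _ = spearmanr(grp["citations"], grp["rank"])
    print(f"{field}: JQF vs rank rho={rho_jqf:.3f}, citations vs rank rho={rho_cit:.3f}")
```

Grouping by broad field before correlating mirrors the per-field analysis described above, so each field contributes one JQF-rank and one citation-rank correlation.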
Findings
JQFs correlated positively and mostly strongly (median correlation: 0.641) with journal ranks in 24 of the 25 broad fields examined, indicating a nearly science-wide ability for ChatGPT to estimate journal quality. Journal citation rates had similarly high correlations with national journal ranks, however, so JQFs are not a universally better indicator. An examination of journals whose JQFs did not match their journal ranks suggested that abstract style may affect the scores, such as whether the societal context of the research is mentioned.
Research limitations
Different journal rankings may have given different findings because there is no agreed meaning for journal quality.
Practical implications
The results suggest that JQFs are plausible as journal quality indicators in all fields and may be useful for the (few) research and evaluation contexts where journal quality is an acceptable proxy for article quality, and especially for fields like mathematics for which citations are not strong indicators of quality.
Originality/value
This is the first attempt to estimate academic journal value with a Large Language Model.