Article category: Research Paper
Published online: 18 Aug 2021
Pages: 139–163
Received: 19 Jun 2021
Accepted: 23 Jul 2021
DOI: https://doi.org/10.2478/jdis-2021-0032
Keywords
© 2021 Zheng Xie, published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Purpose
We propose a method to represent scientific papers by complex networks, combining the approaches of neural networks and complex networks.
Design/methodology/approach
Its novelty lies in representing a paper by word branches, which carry the sequential structure of words in sentences. The branches are generated by the attention mechanism in deep learning models. We connect these branches at the positions of their common words to generate networks, called word-attention networks, and then detect their communities, which are defined as topics.
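As an illustration of this construction, the sketch below connects hypothetical word branches at their common words with NetworkX and detects communities. The hard-coded branches and the greedy modularity algorithm are assumptions for demonstration, not the paper's actual data or community-detection method.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical word branches: word sequences that an attention
# mechanism would extract from sentences (hard-coded here).
branches = [
    ["neural", "network", "attention", "mechanism"],
    ["complex", "network", "community", "detection"],
    ["attention", "mechanism", "deep", "learning"],
]

# Each branch contributes a path of edges; branches sharing a word
# are automatically joined at that word, yielding one network.
G = nx.Graph()
for branch in branches:
    nx.add_path(G, branch)

# Communities of the resulting word-attention network are the topics.
for i, topic in enumerate(greedy_modularity_communities(G)):
    print(f"Topic {i}: {sorted(topic)}")
```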
Findings
The detected topics carry the sequential structure of words in sentences, represent the intra- and inter-sentential dependencies among words, and reveal the roles that words play in them through network indexes.
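For example, standard centrality indexes can quantify the roles of words in such a network. Degree and betweenness centrality on the toy graph below are illustrative choices, not necessarily the indexes reported in the paper.

```python
import networkx as nx

# A toy word-attention network (hypothetical edges).
G = nx.Graph([
    ("neural", "network"), ("network", "attention"),
    ("attention", "mechanism"), ("network", "community"),
    ("community", "detection"),
])

# Degree centrality flags locally well-connected words; betweenness
# centrality flags words that bridge different parts of the network.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
for word in G.nodes:
    print(f"{word}: degree={degree[word]:.2f}, "
          f"betweenness={betweenness[word]:.2f}")
```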
Research limitations
The parameter settings of our method may depend on the data at hand; thus, human experience is needed to find proper settings.
Practical implications
We applied our method to papers from PNAS, using the discipline designations provided by the authors as the gold-standard labels of the papers' topics.
Originality/value
This empirical study shows that the proposed method outperforms Latent Dirichlet Allocation and is more stable.
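For context, an LDA baseline of the kind compared against can be set up with a generic scikit-learn sketch such as the following; the corpus and parameters are placeholders, not the paper's experimental configuration. Note that the bag-of-words input discards exactly the sequential structure that word-attention networks retain.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Placeholder corpus; LDA operates on bag-of-words counts,
# so word order within documents is not modeled.
docs = [
    "neural network attention mechanism",
    "complex network community detection",
    "attention mechanism deep learning",
]
X = CountVectorizer().fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # per-document topic distributions
print(doc_topics.round(2))
```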