Open Access

Zero-sample text classification algorithm based on BERT and graph convolutional neural network

In this study, we present a comprehensive examination of zero-shot text classification and its implications. We adopt the BERT model for text feature representation and then use pointwise mutual information (PMI) to weight the edges of a graph convolutional network, thereby constructing a text graph. We further incorporate an attention mechanism that transforms this text graph so that it can effectively represent the output labels of zero-shot text classification. After setting up the experimental environment, we conduct comparison and ablation experiments against baseline models on several datasets of different types, tune the parameter λ according to the results, and compare the convergence of the BERT model to assess the robustness and classification performance of our approach. With λ set to 0.60, the model achieves the best results on every dataset. In the 5-way-5-shot setting on the Snippets dataset, the model using penultimate-layer features reaches 74%–80% training accuracy by step 5,000; training accuracy flattens within the first 10,000 steps, and the model attains good classification accuracy with stable behavior in all four learning scenarios.
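The PMI-based edge weighting described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the common TextGCN-style construction, where PMI between word pairs is estimated from sliding windows over the corpus and only positive values become word–word edge weights. The function name and parameters are illustrative.

```python
import math
from collections import Counter
from itertools import combinations

def pmi_edge_weights(docs, window_size=5):
    """Estimate word-word edge weights from sliding-window PMI.

    Illustrative sketch: pairs with positive PMI receive an edge
    whose weight is the PMI value; other pairs get no edge.
    """
    # Collect sliding windows of tokens over the corpus.
    windows = []
    for doc in docs:
        tokens = doc.split()
        if len(tokens) <= window_size:
            windows.append(tokens)
        else:
            for i in range(len(tokens) - window_size + 1):
                windows.append(tokens[i:i + window_size])

    n_windows = len(windows)
    word_count = Counter()   # number of windows containing each word
    pair_count = Counter()   # number of windows containing both words
    for w in windows:
        uniq = set(w)
        word_count.update(uniq)
        pair_count.update(frozenset(p) for p in combinations(sorted(uniq), 2))

    weights = {}
    for pair, c_ij in pair_count.items():
        i, j = sorted(pair)
        # PMI(i, j) = log( p(i, j) / (p(i) * p(j)) ), estimated over windows.
        pmi = math.log((c_ij / n_windows) /
                       ((word_count[i] / n_windows) * (word_count[j] / n_windows)))
        if pmi > 0:
            weights[(i, j)] = pmi
    return weights
```

The positive weights returned here would populate the word–word block of the graph adjacency matrix before it is fed to the graph convolutional layers.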

eISSN:
2444-8656
Language:
English
Frequency:
Volume Open
Journal Subjects:
Life Sciences, other, Mathematics, Applied Mathematics, General Mathematics, Physics