In this study, we undertake a comprehensive examination of zero-shot text classification and its implications. We adopt the BERT model for text feature representation, and then use pointwise mutual information (PMI) to set the edge weights within a graph convolutional network, thereby constructing a text graph. We further incorporate an attention mechanism that transforms this text graph so that it can effectively represent the output labels of zero-shot text classification. We set up the experimental environment and carry out comparison and ablation experiments between the proposed BERT-and-graph-convolutional-network text classification model and the baseline models on several datasets of different types, and the parameter settings of
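As a rough sketch of how PMI can supply the edge weights of such a text graph, the following computes sliding-window co-occurrence statistics and keeps only positive-PMI word pairs. The function name, the window size, and the whitespace tokenization are our own illustrative assumptions, not the paper's implementation:

```python
import math
from collections import Counter
from itertools import combinations

def pmi_edge_weights(docs, window=3):
    """Estimate PMI(i, j) from sliding-window co-occurrence counts.

    Pairs with non-positive PMI are dropped, a common choice when
    building word-word edges for a text graph.
    """
    word_counts = Counter()   # windows containing each word
    pair_counts = Counter()   # windows containing each word pair
    n_windows = 0
    for doc in docs:
        tokens = doc.split()
        for start in range(max(1, len(tokens) - window + 1)):
            win = set(tokens[start:start + window])
            n_windows += 1
            word_counts.update(win)
            for a, b in combinations(sorted(win), 2):
                pair_counts[(a, b)] += 1
    weights = {}
    for (a, b), n_ab in pair_counts.items():
        # PMI(a, b) = log( p(a, b) / (p(a) * p(b)) )
        pmi = math.log(
            (n_ab / n_windows)
            / ((word_counts[a] / n_windows) * (word_counts[b] / n_windows))
        )
        if pmi > 0:
            weights[(a, b)] = pmi
    return weights
```

Positive PMI indicates that two words co-occur more often than chance, which is why only those pairs are retained as graph edges.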
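To make the graph-convolution step concrete, here is a minimal NumPy sketch of one propagation layer over a weighted adjacency matrix, using the standard symmetric normalization with self-loops. This is a simplified illustration of a generic GCN layer, not the paper's exact architecture:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    adj:    (n, n) weighted adjacency matrix (e.g. PMI edge weights)
    feats:  (n, d_in) node feature matrix (e.g. BERT embeddings)
    weight: (d_in, d_out) learnable layer weights
    """
    a_hat = adj + np.eye(adj.shape[0])              # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))   # D^-1/2 diagonal
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm @ feats @ weight, 0.0)   # ReLU activation
```

In the setting described above, `feats` would hold BERT-derived node representations and `adj` the PMI-weighted text graph; stacking such layers propagates label-relevant information across connected words.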