Graph neural networks applied to skeleton data have recently become popular for action recognition, because a skeleton carries intuitive and rich action information while being unaffected by background, illumination and other factors. The spatial–temporal graph convolutional network (ST-GCN) is a dynamic skeleton model that automatically learns spatial–temporal patterns from data; it offers both stronger expressive power and stronger generalisation ability, and achieves remarkable results on public datasets. However, ST-GCN directly learns the information of adjacent nodes (local information) and is insufficient at learning the relations between non-adjacent nodes (global information), which are required by actions such as clapping. This paper therefore proposes an ST-GCN based on node attention (NA-STGCN), which addresses the lack of global information in ST-GCN by introducing a node attention module that explicitly models the interdependence between all nodes. Experimental results on the NTU RGB+D dataset show that the node attention module effectively improves the accuracy and feature-representation ability of the existing algorithm, and markedly improves recognition of actions that require global information.
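To illustrate the idea of node attention described above, the following is a minimal NumPy sketch of a squeeze-and-excitation-style attention over skeleton joints. It assumes skeleton features shaped (channels, frames, joints); the function name `node_attention` and the two weight matrices `w1`, `w2` are illustrative placeholders, and the paper's actual module may differ in architecture and detail.

```python
import numpy as np

def node_attention(x, w1, w2):
    # x: (C, T, V) skeleton features for C channels, T frames, V joints.
    # Squeeze: pool over channels and time to get one descriptor per joint.
    s = x.mean(axis=(0, 1))                  # shape (V,)
    # Excitation: small bottleneck MLP mixes information across ALL joints,
    # so each joint's weight depends on non-adjacent joints (global context).
    h = np.maximum(w1 @ s, 0.0)              # ReLU, shape (V // r,)
    a = 1.0 / (1.0 + np.exp(-(w2 @ h)))      # sigmoid gate, shape (V,)
    # Reweight every joint's features by its attention score.
    return x * a                             # broadcasts over (C, T, V)

# Toy usage with fixed (hypothetical) weights, C=2, T=3, V=4, reduction r=2.
x = np.arange(24, dtype=float).reshape(2, 3, 4)
w1 = np.full((2, 4), 0.1)
w2 = np.full((4, 2), 0.5)
out = node_attention(x, w1, w2)
```

Because the excitation step is fully connected across joints, a joint such as the left hand can be reweighted based on the right hand's features even though the two are not adjacent in the skeleton graph, which is the kind of global dependency clapping requires.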