Feature Analysis and Application of Music Works Based on Artificial Neural Network
Published Online: Feb 27, 2025
Received: Oct 04, 2024
Accepted: Jan 26, 2025
DOI: https://doi.org/10.2478/amns-2025-0130
© 2025 Yu Wang et al., published by Sciendo
This work is licensed under the Creative Commons Attribution 4.0 International License.
At present, genre classification models in music apps are unstable: recognition errors are frequent, recognition takes only a single form, and the music signal features used are overly simple, all of which lead to low classification and recognition accuracy. To judge the background music type of various scenes efficiently and accurately, help users quickly find the music styles they prefer, push matching songs, and thereby increase how often users open a music app, this paper proposes a music style classification method based on an artificial neural network, a back-propagation network referred to as BP for short. To build a complete music style classification model, a music library in Python is first used as the data warehouse from which musical works are extracted; signals such as timbre, pitch, and instrumental accompaniment are labeled and used as the input for subsequent model training. A multi-particle ant algorithm serves as the tool for finding the optimal network parameters: the weights and averages of the bidirectional neural network are evaluated as the fitness function, the particles' states, velocities, and positions are updated, and the network is output once the stopping condition is met. PCA (principal component analysis) and LDA (linear discriminant analysis) are then applied to reduce the dimensionality of the data and visualize it, confirming that the chosen fitness function is effective and the data are reasonable. Finally, the processed results yield a theoretical model of music style recognition and classification, which is compared against traditional classification models. The proposed model achieves an accuracy of 99.12%, outperforming the traditional models, and it captures characteristics of sound quality, rhythm, and melody.
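As a rough illustration of the feature-extraction step, the sketch below assumes librosa as the Python music library (the abstract does not name one) and computes the kinds of signals the abstract mentions: timbre via MFCCs, tonal content via chroma, and rhythm via a tempo estimate. The function name and feature choices are illustrative, not the paper's exact pipeline.

```python
import librosa
import numpy as np

def extract_features(path, sr=22050):
    """Return a fixed-length feature vector for one audio file."""
    y, sr = librosa.load(path, sr=sr)  # decode and resample the track
    # Timbre: 13 MFCCs, averaged over time frames.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    # Tonal content: 12-bin chroma, averaged over time frames.
    chroma = librosa.feature.chroma_stft(y=y, sr=sr).mean(axis=1)
    # Rhythm: a single global tempo estimate in BPM.
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    return np.concatenate([mfcc, chroma, np.atleast_1d(tempo)])
```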
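For the classification step, a minimal back-propagation (BP) baseline can be sketched with scikit-learn's MLPClassifier. Note that this fits the weights by ordinary gradient descent; the abstract's multi-particle ant search over network weights is not reproduced here, and the data below are synthetic placeholders standing in for the extracted feature vectors.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: 200 tracks, 26 features (13 MFCC + 12 chroma + tempo), 4 genres.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 26))
y = rng.integers(0, 4, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# A small BP network; standard backpropagation replaces the paper's
# particle-based weight search in this sketch.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```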
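The PCA and LDA dimensionality-reduction step the abstract describes can likewise be sketched with scikit-learn, again on placeholder data. The contrast is the point of using both: PCA is unsupervised and keeps directions of maximal variance, while LDA uses the genre labels and is limited to at most n_classes - 1 discriminant axes.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Placeholder feature matrix and genre labels (same shapes as above).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 26))
y = rng.integers(0, 4, size=200)

# PCA: keep enough components to explain 95% of the variance.
X_pca = PCA(n_components=0.95).fit_transform(X)

# LDA: with 4 classes, at most 3 discriminant axes are available.
X_lda = LinearDiscriminantAnalysis(n_components=3).fit_transform(X, y)

print("PCA:", X_pca.shape, "LDA:", X_lda.shape)
```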