Open Access

Reform of Traditional Music Teaching Methods and Cultivation of Students’ Musical Creativity on Digital Platforms

3 February 2025


Introduction

In recent years, computer technology has developed rapidly, and the Internet has become a "necessity" for students' studies and everyday life. In school education in particular, digital music teaching has attracted growing attention, and its advantages have been widely recognised [1-3].

In essence, digital music teaching is a form of music education that relies on diverse information technologies to cultivate students' abilities to recognise, appreciate and create music. Implementing digital music teaching can resolve many of the difficulties that music teaching faced in the past, diversify the music classroom, stimulate students' interest in learning, and provide a platform for developing their musical creativity [4-8].

With digital technology, music teaching can share information with students through sound, image, text and other channels, changing the traditional mode of classroom instruction, enriching students' learning resources, and creating an extraordinary classroom experience for students, a major innovation in music teaching [9-10]. Digital music technology can be applied to classroom instruction, teaching strategies, teaching evaluation and many other aspects of the music classroom, enabling an intelligent reform of the entire teaching process that enhances teaching effectiveness and helps students learn music more effectively [11-13].

In short, as science and technology develop, music teaching methods keep improving. Digital music technology is among the more cutting-edge of these methods, supplying more diverse equipment for music teaching. Teachers can operate professional equipment to give students a more profound experience, which not only helps improve students' musical literacy but also creates better conditions for developing their musical creativity [14-15]. At the same time, digital technology gives students access to richer music resources, broadens their horizons, and helps them form more intuitive musical impressions, so that they can perceive the charm of music from multiple dimensions and experience the joy of learning it [16-17].

With the rapid development of society, digital technology has been applied ever more widely in music education; relying on computer technology, it is now possible to build a digital music classroom, and in a sense such a transformation carries a certain inevitability. Merchán-Sánchez-Jara, J. F. et al. conceptualised a structured, sequential framework to assist popular music co-composition and analysed the structural layering, rhythmic foundations and melodies of rock songs [18]. Camlin, D. A. et al. examined the impact on students of the digital transformation of music teaching during the pandemic and offered targeted recommendations for the digital transformation of music education [19]. Pendergast, S. introduces a multifunctional music teaching classroom model built around the Digital Audio Workstation (DAW), providing an in-depth analysis of the music creation process within this guiding framework and contributing to the design of a DAW-based music teaching methodology [20]. Huang, B. examined how composition software affects students' interest and motivation in learning electronic music and found, through teaching practice, that classroom instruction based on composition software increased students' interest, confidence and concentration [21]. Mandanici, M. et al. outlined music pedagogical practices using digital materials and categorised those materials, leading to a clear pedagogical framework with different orientations for music teachers' use of digital technologies [22]. Liu, X. developed an online piano teaching tutorial comprising three modules; the investigation found that the Little Leaf AI Piano Tutor app performed best, integrating piano playing skills and playing thinking with a digital learning plan that effectively enhanced students' piano learning [23].

Musical creativity is the specific expression of creativity in the field of music; conceptually it inherits the intellectual and mental attributes of general creativity and likewise takes novelty and appropriateness as its basic elements. Kaplan, D. E. attempted to integrate creativity theories into instructional design, analysing the roles they play in classrooms and instructional projects and contributing positively to the cultivation of students' creative thinking [24]. Ng, D. T. et al., using semi-structured interviews and teacher observation, found that students' motivation and creativity were enhanced in a flipped music classroom model, confirming the value and feasibility of online music teaching during a special period [25]. Bishop, L. examined how creativity is distributed in musical collaboration and, drawing on creativity theory, argued that the interaction between ensemble members' behaviour and their social and cultural environment is the basis of their creativity [26]. Cremata, R. conducted an empirical investigation of popular music teaching based on a constructivist approach to learning, challenged the idea that a student-centred teaching model necessarily increases students' initiative, and argued that a classroom with a certain degree of control is more conducive to students' creativity and to the development of inclusive, collaborative skills [27]. Kutlimuratovich, A. B. discusses the Kodály approach to music education, covering its origin, development, refinement and occasions of application, and attempts to understand its underlying logic and principles for incorporation into the music classroom [28].

In this paper, vocal range features for music teaching are extracted by calculating the pitch value D of the singing voice, together with principal features such as fundamental frequency perturbation, resonance peak perturbation and average energy. The extracted features are input into a recurrent neural network model, and a long short-term memory (LSTM) network is introduced to improve its performance. A self-attention mechanism then optimises the model: according to the weights assigned by the attention layer, the influence of redundant data is reduced and more effective musical sound data are extracted. On this basis, the improved combined neural network model CLSA is proposed to visualise song sounds in music teaching. Finally, guided by multiple intelligences theory, a digital music teaching system is designed on a B/S architecture, combining the objectives and principles of reforming traditional music teaching methods. After validating the CLSA model's performance in separating the human voice from accompaniment and noise, teaching experiments are conducted to analyse the impact of the digital teaching platform on students' comprehensive music performance and musical creativity.

Method
Music feature extraction method
Tone range extraction method

A common way to estimate the vocal range is to take the maximum and minimum pitch values of the song and its score; the mean and standard deviation are generally used as measures to avoid chance effects in the data and improve accuracy. To find the range, the pitch value must first be obtained; the calculation of pitch D is described below.

Pitch is determined by the vibration frequency of the object (the vocal folds): the more vibrations per unit time, the higher the pitch, and the fewer, the lower. Pitch itself is abstract, however, and its value can only be obtained from the fundamental frequency curve. In experiments, pitch is therefore often described by a value D that reflects it, defined as:

$$D = 12\log_2\left(\frac{F}{F_0}\right) \tag{1}$$

Here F is the pitch frequency in Hertz, F0 is the reference frequency, and D is the pitch value. In vocal music an octave spans 12 such units, so D aligns with the semitone scale of the piano. In this paper the reference frequency is 80 Hz, i.e. F0 = 80. This value is chosen because many researchers have found that it both avoids negative pitch values and keeps the pitch error between boys and girls small. The measurement of D is often based on experience, and its calculation should include the pitches of all the songs and tunes in the score.

The average of all pitches in the song and the score, $\bar{D}$, is shown in equation (2):

$$\bar{D} = \frac{1}{N}\sum_{j=1}^{N} D_j \tag{2}$$

In the formula, N is the number of audio samples and $D_j$ is the pitch of the jth tone.

The standard deviation σ of all pitches in the song and the score is shown in equation (3):

$$\sigma = \sqrt{E\left[\left(D_j - \bar{D}\right)^2\right]}, \quad j = 1, 2, \ldots, N \tag{3}$$

Where E[·] denotes the mean value, N is the number of audio samples, and $D_j$ is the pitch of the jth tone.
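
The range features above (Eqs. (1)-(3)) can be sketched in a few lines. This is an illustrative NumPy sketch, not the paper's implementation; the example frequencies are assumed values.

```python
import numpy as np

def pitch_d(f_hz, f0_ref=80.0):
    """Pitch value D relative to the 80 Hz reference frequency (Eq. 1)."""
    return 12.0 * np.log2(np.asarray(f_hz, dtype=float) / f0_ref)

def range_features(f_hz, f0_ref=80.0):
    """Mean and standard deviation of D over all notes (Eqs. 2-3)."""
    d = pitch_d(f_hz, f0_ref)
    return d.mean(), d.std()

# Assumed fundamental frequencies (Hz) of a short sung phrase; note that
# 160 Hz is one octave above the 80 Hz reference, so its D value is 12.
d_mean, d_std = range_features([160.0, 320.0, 640.0])
```

Because each octave adds 12 to D, the resulting values align directly with semitone steps, which is what makes the mean and standard deviation meaningful descriptors of the range.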

Base frequency and resonance peak perturbation extraction

Fundamental frequency perturbation [29] is defined as the rate of change between the acoustic fundamental frequency of one cycle and that of the next. It is often used to measure how the sound wave changes across cycles, and essentially reflects how fast or slowly the vocal folds vibrate from cycle to cycle.

The mathematical definition of fundamental frequency perturbation is given in equation (4):

$$\mathrm{jitter} = \frac{1}{N-1}\sum_{i=2}^{N}\left|\frac{1}{F_{0i}} - \frac{1}{F_{0(i-1)}}\right| \tag{4}$$

Where jitter is the fundamental frequency perturbation, N is the number of audio samples, and $F_{0i}$ is the fundamental frequency of the ith cycle.

Resonance peak perturbation is defined as the rate of change between the resonance peak of one cycle and that of the next. It is often used to measure how resonance peaks change from cycle to cycle, and essentially reflects the quality of the singing voice or the singer's skill level. Here, the first resonance peak perturbation measures the rate of change of the first resonance peak between adjacent cycles, and the third resonance peak perturbation measures the rate of change of the third resonance peak between adjacent cycles.

The mathematical definition of the first resonance peak perturbation is given in equation (5):

$$\frac{1}{N-1}\sum_{i=2}^{N}\left|\frac{1}{F_{1i}} - \frac{1}{F_{1(i-1)}}\right| \tag{5}$$

In the equation, $F_{1i}$ represents the first resonance peak of the ith cycle and N is the number of audio samples.

The mathematical definition of the third resonance peak perturbation is given in equation (6):

$$\frac{1}{N-1}\sum_{i=2}^{N}\left|\frac{1}{F_{3i}} - \frac{1}{F_{3(i-1)}}\right| \tag{6}$$

Where $F_{3i}$ represents the third resonance peak of the ith cycle and N is the number of audio samples.
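
Equations (4)-(6) share one form: the mean absolute difference of reciprocal values between adjacent cycles. A minimal sketch, assuming per-cycle frequency estimates are already available (the example values are invented):

```python
import numpy as np

def perturbation(values_hz):
    """Mean absolute cycle-to-cycle change of reciprocal values (Eqs. 4-6).

    Pass per-cycle F0 estimates for fundamental frequency perturbation, or
    per-cycle F1 / F3 estimates for the first / third resonance peak
    perturbation.
    """
    periods = 1.0 / np.asarray(values_hz, dtype=float)
    return np.abs(np.diff(periods)).sum() / (len(periods) - 1)

# Assumed per-cycle F0 estimates (Hz) for a fairly steady singing voice.
jitter = perturbation([100.0, 101.0, 99.0, 100.0])
```

A steadier voice yields smaller reciprocal differences and hence a smaller perturbation value.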

Average energy extraction

Average energy represents the strength of the signal over a short interval and is often used to measure the relative loudness of the singing signal. It is mathematically defined as:

$$E_n = \sum_{k=-\infty}^{+\infty} x^2(k)\, w(n-k)$$

In the equation, $E_n$ is the short-time energy, x(k) is the input signal, and w(n-k) is the window function.
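
The short-time energy above is a convolution of the squared signal with a window. A minimal sketch with an assumed rectangular window:

```python
import numpy as np

def short_time_energy(x, win_len=4):
    """E_n = sum_k x^2(k) w(n-k), with a rectangular window w."""
    w = np.ones(win_len)
    return np.convolve(np.asarray(x, dtype=float) ** 2, w, mode="same")

# A quiet segment followed by a louder one: the energy curve rises with it.
energy = short_time_energy([0.1] * 8 + [0.5] * 8)
```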

Neural network based music visualisation methods
Recurrent Neural Network Construction

A recurrent neural network (RNN) [30] performs the same operation on each element of a data sequence, computing the next output from the current input and the previous output, so that earlier information is related to the current information. This gives it good performance on sequential data, and it is therefore commonly used in dynamic regression problems. An RNN unit is computed as:

$$h_t = \tanh\left(W_h\left[h_{t-1}, x_t\right] + b_h\right)$$

However, the output of an RNN depends on current information and loses context over time. To address this long-term dependence problem, a long short-term memory network (LSTM) is usually introduced. The LSTM is a special form of RNN that adds memory cells to retain information across long data sequences, overcoming the gradient vanishing and gradient explosion phenomena of the RNN.

The LSTM adds three gates to each cell: an input gate, an output gate and a forgetting gate. The forgetting gate decides which information from the previous cell state to keep or delete; the input gate selectively records information into the cell state at the current moment, which is then emitted through the output gate. The forgetting gate is given by:

$$f_t = \sigma_g\left(W_f s_t + U_f c_{t-1} + b_f\right)$$

The input gate vector is given by:

$$i_t = \sigma_g\left(W_i s_t + U_i c_{t-1} + b_i\right)$$

The output gate vector, cell state and hidden output are given by:

$$o_t = \sigma_g\left(W_o s_t + U_o c_{t-1} + b_o\right)$$
$$c_t = f_t \circ c_{t-1} + i_t \circ \sigma_h\left(W_c s_t + b_c\right)$$
$$h_t = \sigma_h\left(o_t \circ c_t\right)$$

Where $\sigma_g$ is the Sigmoid activation function, $\sigma_h$ is the hyperbolic tangent function, $\circ$ denotes element-wise multiplication, f, i, o are the forgetting gate, input gate and output gate vectors, respectively, and W, U, b are the weight matrices and bias vectors of each gate. For the input gate, the output of the previous moment is weighted and summed with the input of the current moment and passed through the Sigmoid function to obtain a value between 0 and 1 that selects the proportion of current input information to retain: 1 keeps all the information and 0 forgets all of it.
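
One LSTM step as written above can be sketched directly. The gates here read the current input s_t and the previous cell state c_{t-1}, following the equations in the text (note this differs from the textbook LSTM, which feeds the previous hidden state into the gates). The weights are random placeholders, and NumPy stands in for a deep-learning framework:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d_in, d_h = 4, 3                          # input and hidden sizes (assumed)
W = {g: rng.standard_normal((d_h, d_in)) for g in "fioc"}
U = {g: rng.standard_normal((d_h, d_h)) for g in "fio"}
b = {g: np.zeros(d_h) for g in "fioc"}

def lstm_step(s_t, c_prev):
    f = sigmoid(W["f"] @ s_t + U["f"] @ c_prev + b["f"])  # forgetting gate
    i = sigmoid(W["i"] @ s_t + U["i"] @ c_prev + b["i"])  # input gate
    o = sigmoid(W["o"] @ s_t + U["o"] @ c_prev + b["o"])  # output gate
    c = f * c_prev + i * np.tanh(W["c"] @ s_t + b["c"])   # new cell state
    h = np.tanh(o * c)                                    # hidden output
    return h, c

h, c = lstm_step(rng.standard_normal(d_in), np.zeros(d_h))
```

Because the output passes through tanh, every component of h stays strictly inside (-1, 1).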

Improved combinatorial neural network models

To ensure the reliability of model training, this paper adds a self-attention layer [31] after the recurrent layer to optimise the LSTM network. Compared with the traditional attention mechanism, self-attention can capture not only the dependency between the source and target sequences but also the dependencies within the source or target sequence itself, which the traditional mechanism cannot obtain. It is a sequence coding layer that assigns weight coefficients without occupying large amounts of computer memory. When the input at moment t is more similar to the target information, the attention layer assigns it a larger weight, which effectively reduces the influence of redundant data and extracts more effective musical data. The weighted feature vector is calculated as:

$$\mathrm{Attention}(Q, K, V) = \mathrm{SoftMax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$$

Where the three inputs Q, K, V are the query, key and value sequences, and the scaling factor $\sqrt{d_k}$ compensates for oversized dot-product results. In the self-attention mechanism the three inputs take the same value; after mapping Q, K, V through parameter matrices, self-attention is computed and repeated several times, and finally all the results are concatenated and sent to the fully connected layer.
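
A minimal NumPy sketch of the scaled dot-product self-attention above (a single head; the projection matrices and sequence sizes are assumptions for illustration):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """Self-attention: the same sequence x supplies Q, K and V."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))   # attention weight matrix
    return weights @ V, weights

rng = np.random.default_rng(1)
x = rng.standard_normal((5, 8))                 # 5 time steps, 8 features
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out, w = self_attention(x, Wq, Wk, Wv)
```

Each row of the weight matrix sums to 1, so every output step is a convex combination of the value vectors, weighted by similarity between time steps.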

In summary, given the temporal and continuous nature of music, a single neural network is less effective. This paper therefore constructs an improved combined neural network, CLSA, as the base model. To ensure the CNN component works well, VGG16 is used: a pre-trained CNN with 16 weighted layers, an input size of 224×224 RGB images and 3×3 kernels, consisting of 13 convolutional layers, 5 max-pooling layers and 3 fully connected layers, with ReLU and SoftMax as activation functions.

All weights of the convolutional and pooling layers are frozen; only the last fully connected layer is left unfrozen and replaceable, and the main strategy is to fine-tune it. The one-dimensional feature vector of the time-frequency diagram is output through this fully connected layer, then downscaled through an FC layer to reduce the number of parameters, and fed to the LSTM layer as input to capture short- and long-term dependencies. The LSTM output is connected to 2 fully connected layers to improve the robustness of the whole network, and finally the Valence and Arousal values are predicted by regression through the fully connected layers [32], achieving a visual display of students' musical sounds during music teaching.
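
The CLSA data flow (frozen CNN backbone → downscaling FC → LSTM → regression head) can be sketched structurally. The pre-trained VGG16 backbone is stood in here by a frozen random projection, and all sizes are illustrative, not the model's real dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Frozen stand-in for the VGG16 stack: maps a flattened 224x224 time-frequency
# image to a 1-D feature vector (the real model uses pre-trained weights).
backbone = rng.standard_normal((224 * 224, 32)) * 0.01
fc_down = rng.standard_normal((32, 8)) * 0.1     # dimensionality-reducing FC

d_h = 8
Wx = {g: rng.standard_normal((d_h, 8)) * 0.1 for g in "fioc"}
Wh = {g: rng.standard_normal((d_h, d_h)) * 0.1 for g in "fioc"}

def lstm(seq):
    """Run a minimal LSTM over the per-frame features; return the last h."""
    h = np.zeros(d_h)
    c = np.zeros(d_h)
    for x in seq:
        f = sigmoid(Wx["f"] @ x + Wh["f"] @ h)
        i = sigmoid(Wx["i"] @ x + Wh["i"] @ h)
        o = sigmoid(Wx["o"] @ x + Wh["o"] @ h)
        c = f * c + i * np.tanh(Wx["c"] @ x + Wh["c"] @ h)
        h = o * np.tanh(c)
    return h

head = rng.standard_normal((d_h, 2)) * 0.1       # regression to (Valence, Arousal)

def clsa_forward(frames):
    """frames: array of shape (T, 224*224), flattened time-frequency images."""
    feats = [np.tanh(f @ backbone) @ fc_down for f in frames]
    return lstm(feats) @ head

va = clsa_forward(rng.standard_normal((5, 224 * 224)))
```

Only the unfrozen head layers would receive gradients during fine-tuning; the frozen backbone acts as a fixed feature extractor.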

Construction of digital platform for music teaching
Reform of music teaching methods based on multiple intelligences

Multiple intelligences theory [33] was proposed by the Harvard University developmental psychologist Howard Gardner; its framework is shown in Figure 1. It is a broad system of intelligence encompassing novel, applicable concepts related to the eight basic human intelligences. In reforming traditional music teaching methods, the advantages of digital learning resources should be exploited to create a classroom environment richer in multiple intelligences, engaging students' multiple senses so that their various intelligences are exercised while their strongest ones come into full play, with the aim of improving creativity and interest in music. Moreover, in a digital teaching situation, with musical intelligence in the leading role, music teaching and training also exercise students' logical-mathematical and visual-spatial intelligences, in turn promoting the development of their multiple intelligences. This theory is thus a significant theoretical basis for introducing digital learning resources into music teaching and reforming traditional teaching methods.

Figure 1.

Multiple intelligence theory

Digital teaching system design

Guided by multiple intelligences theory, and supported technically by the music speech recognition and music visualisation models, this paper designs a digital music teaching platform to realise the reform of traditional music teaching methods. The platform is developed on the B/S structure [34] and contains a three-layer architecture comprising the representation layer, the business logic layer and the data access layer; the specific architecture is shown in Fig. 2, and the role of each layer is described below.

The representation layer provides the visual operating interface through which users input information, submit it to the backend and make data requests; it is the interface through which users and the system communicate. It is roughly divided into three parts: the user information management UI, the song information management UI and the teaching management UI.

The business logic layer is the core of the system. It encapsulates each function point, organises them into atomic function points, and exposes them to the representation layer to fulfil user requests. This layer includes a large number of function points, such as adding users, deleting songs and uploading assignments. Both the music sound recognition method and the music visualisation model proposed above are applied in the business logic layer to support innovation in music teaching.

The data access layer is the interface to the file system and database. It is called by the business logic layer to perform Select, Insert, Update, Delete and other operations on data tables, and access, update, delete and other operations on disk files.
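
The division of responsibilities across the three layers can be sketched as plain classes; the names here are illustrative, not the platform's actual API:

```python
class DataAccessLayer:
    """Interface to storage; an in-memory dict stands in for the database."""
    def __init__(self):
        self._songs = {}
    def insert(self, song_id, title):
        self._songs[song_id] = title
    def select(self, song_id):
        return self._songs.get(song_id)
    def delete(self, song_id):
        self._songs.pop(song_id, None)

class BusinessLogicLayer:
    """Atomic function points (add song, look up song, ...) built on the DAL."""
    def __init__(self, dal):
        self._dal = dal
    def add_song(self, song_id, title):
        if self._dal.select(song_id) is not None:
            raise ValueError("song already exists")
        self._dal.insert(song_id, title)
    def get_song(self, song_id):
        return self._dal.select(song_id)

# The representation layer (UI) would call the business layer like this:
bll = BusinessLogicLayer(DataAccessLayer())
bll.add_song(1, "Ode to Joy")
```

Keeping the UI unaware of the storage details is what lets the platform swap the backing database or file store without touching the representation layer.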

Figure 2.

Digital teaching platform design

The greatest advantage of using the music sound recognition and visualisation models in the designed digital platform is the ability to present spectra and sounds synchronously, bringing hearing and vision into harmony. This intuitive approach lets students view the spectrum while listening to the music, perceive the close relationship between sound and audio more clearly, and deepen both their understanding of the elements of a song and their score-reading ability by using ears and eyes together. At the same time, these novel, striking sensory effects and stereoscopic sound attract students' attention and inspire them to engage in the study of music, while the teaching process also trains their visual-spatial and musical-rhythmic intelligences.

Results and discussion
Analysis of the effectiveness of the music source feature extraction model

There is often considerable noise in the music teaching classroom, and the accompaniment when students sing will affect the recognition and visual presentation of the sounds. Therefore, the music sound separation performance of the proposed model is verified before it is applied to the digital music teaching platform.

Experimental environment and data set

Experimental environment

The experiments in this paper use PyTorch, the open-source Python deep learning library, which makes it easy to build deep neural networks; its code is relatively intuitive, and it supports both CPU training and GPU-accelerated training.

Experimental dataset

The MIR-1K and DSD100 datasets were selected for the experiments. MIR-1K is a music dataset for separating vocals from accompaniment and contains 1000 music clips; in the experiments, 189 clips are used as the training set and the remaining 811 clips as the validation set. DSD100 is a professional dataset for the music source separation task containing 100 complete songs in various styles, pre-divided into a training set and a test set of 50 songs each. For supervised learning on this dataset, the training set is used for learning and both the training and test sets are used for validation.

Assessment of indicators

The quality of separated music source signals is currently assessed mainly with the Blind Source Separation Evaluation toolkit. This evaluation decomposes the signal under evaluation, $\hat{s}(t)$, into four components:

$$\hat{s}(t) = s_{\mathrm{target}}(t) + e_{\mathrm{interf}}(t) + e_{\mathrm{noise}}(t) + e_{\mathrm{artif}}(t)$$

In the above equation, $s_{\mathrm{target}}(t)$ denotes the effective component of the predicted target source signal $s_i(t)$, $e_{\mathrm{interf}}(t)$ is the interference component contained in the predicted acoustic signal, $e_{\mathrm{noise}}(t)$ is the noise component, and $e_{\mathrm{artif}}(t)$ is the "artifact" component of the predicted signal.

The toolkit uses four quantitative metrics to compute the global performance of a separated audio signal: the Source-to-Distortion Ratio (SDR), Source-to-Interference Ratio (SIR), Source-to-Noise Ratio (SNR) and Source-to-Artifact Ratio (SAR). The higher these values, the better the robustness and low-noise behaviour of the evaluated signal and the better the separation effect of the algorithm, so all four are used as evaluation metrics in this paper. They are defined as follows:

$$\mathrm{SDR} = 10\log_{10}\frac{\left\|s_{\mathrm{target}}(t)\right\|^2}{\left\|e_{\mathrm{interf}}(t) + e_{\mathrm{noise}}(t) + e_{\mathrm{artif}}(t)\right\|^2}$$
$$\mathrm{SIR} = 10\log_{10}\frac{\left\|s_{\mathrm{target}}(t)\right\|^2}{\left\|e_{\mathrm{interf}}(t)\right\|^2}$$
$$\mathrm{SNR} = 10\log_{10}\frac{\left\|s_{\mathrm{target}}(t) + e_{\mathrm{interf}}(t)\right\|^2}{\left\|e_{\mathrm{noise}}(t)\right\|^2}$$
$$\mathrm{SAR} = 10\log_{10}\frac{\left\|s_{\mathrm{target}}(t) + e_{\mathrm{interf}}(t) + e_{\mathrm{noise}}(t)\right\|^2}{\left\|e_{\mathrm{artif}}(t)\right\|^2}$$

To obtain a more accurate performance evaluation, the SDR metric is normalised to give the NSDR, defined as:

$$\mathrm{NSDR}(\hat{s}, x, s) = \mathrm{SDR}(\hat{s}, s) - \mathrm{SDR}(x, s)$$

Where $\hat{s}$ is the music source signal separated by the algorithm, x is the original mixed music signal, and s is the pure music source signal. The SDR of the separated signal against the pure source and the SDR of the original mixture against the pure source are computed and subtracted to obtain the normalised metric NSDR; the metrics computed over all segments of the evaluated signal are then weighted and averaged to obtain GNSDR, GSIR and GSAR.
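
A simplified sketch of the NSDR computation, reading SDR in its plainest projection-free form, 10·log10 of the ratio of reference energy to error energy; the full BSS-Eval toolkit first decomposes the estimate into the four components above, which this sketch skips. The signals are synthetic:

```python
import numpy as np

def sdr(est, ref):
    """Simplified SDR: reference energy over residual-error energy, in dB."""
    err = est - ref
    return 10.0 * np.log10(np.sum(ref ** 2) / np.sum(err ** 2))

def nsdr(est, mix, ref):
    """NSDR: improvement of the separated estimate over the raw mixture."""
    return sdr(est, ref) - sdr(mix, ref)

rng = np.random.default_rng(3)
ref = rng.standard_normal(1000)              # pure source signal s
mix = ref + rng.standard_normal(1000)        # mixture x (source + interference)
est = ref + 0.1 * rng.standard_normal(1000)  # separated estimate s-hat
gain = nsdr(est, mix, ref)
```

A positive NSDR means the separation moved the signal closer to the pure source than the unprocessed mixture was.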

Results of validity analyses

To investigate the effectiveness of the proposed CLSA model, it is compared with current mainstream models for music feature extraction: SHN-4, PSHN, NCRPCAi, EFN and Att-Ret+Coh. The separation performance of the different methods on the MIR-1K dataset is shown in Table 1. Among the deep-neural-network-based music source separation models, the proposed CLSA model achieves the best GNSDR for vocals and the best GSAR for accompaniment. Compared with SHN-4, it improves the GNSDR, GSIR and GSAR of vocals by 2.08 dB, 1.41 dB and 1.07 dB, and the GSAR of accompaniment by 0.61 dB. In the GNSDR of accompaniment, the CLSA model lags behind the best model (Att-Ret+Coh) by only 0.07 dB. Overall, the proposed CLSA model matches the performance of current mainstream models on the MIR-1K dataset.

Table 1. Separation performance indicators of different methods on MIR-1K

Method       | Vocal (dB)             | Accompaniment (dB)
             | GNSDR   GSIR    GSAR   | GNSDR   GSIR    GSAR
NCRPCAi      | 6.58    10.48   10.45  | 6.94    12.48   8.32
EFN          | 8.54    13.62   10.85  | —       —       —
SHN-4        | 8.64    14.25   11.59  | 9.52    13.78   12.27
PSHN         | 9.51    14.88   11.81  | 9.56    13.96   12.48
Att-Ret+Coh  | 9.87    15.60   12.32  | 9.65    13.97   12.85
CLSA         | 10.72   15.66   12.66  | 9.58    14.09   12.88

The SDR metrics for music sound separation of the different methods on the DSD100 dataset are shown in Table 2. The proposed CLSA model obtains the best separation performance on Drums (4.49 dB) and Other (2.75 dB), and the second- and third-best performance on Bass (2.23 dB) and Vocals (5.43 dB), respectively. Compared with the Att-Ret+Coh model, the CLSA model improves the SDR of Drums and Other by 0.04 dB and 0.11 dB, and lags by only 0.02 dB on Vocals. This demonstrates the effectiveness of the music sound recognition method used in this paper, which incorporates the self-attention mechanism into the network model. Overall, the proposed CLSA model matches the performance of current mainstream models on the DSD100 dataset.

Table 2. Evaluation of different methods on the DSD100 validation set

Method       | Bass (dB) | Drums (dB) | Other (dB) | Vocals (dB)
NCRPCAi      | 1.61      | 4.05       | 2.33       | 5.19
EFN          | 1.71      | 4.08       | 2.39       | 5.36
SHN-4        | 1.80      | 4.13       | 2.45       | 5.43
PSHN         | 2.08      | 4.44       | 2.59       | 5.50
Att-Ret+Coh  | 2.24      | 4.45       | 2.64       | 5.45
CLSA         | 2.23      | 4.49       | 2.75       | 5.43

Research on the application of digital music teaching platform
Experimental design for teaching music

Teaching Subjects

This paper chooses School S, where students' interest in learning music and their creativity are generally low, as the research site. The digital platform for music visualisation and teaching is applied to the music classroom in School S to develop students' musical comprehension, sensibility and creativity through visual aids. Before the experiment, a music test paper was created together with three music teachers of School S based on the criteria of the academic quality test of music in the curriculum standard. Through interviews about teachers' lesson preparation and teaching techniques, the students' mastery of existing music knowledge at this stage, their attitudes towards music and their motivation to learn were established. Four classes of the same grade in School S, (1), (5), (6) and (8), were chosen as the sample; they differ little in class size and male-to-female ratio, and their overall performance levels are comparable. Classes (1) and (5) were randomly assigned as experimental classes, numbered A and a, and classes (6) and (8) as control classes, numbered B and b; the digital music teaching platform was applied to the teaching of the experimental classes. The specific sample classes are shown in Table 3. According to the experimental pre-test, the average total scores of the students in classes A and a were 57.10 and 56.94, and those of classes B and b were 56.02 and 54.87, respectively.
From the music test scores, it was concluded that there was little difference in the competence levels of the sampled classes: the students' music knowledge and overall musical literacy showed no significant differences. The selection of these four classes as experimental and control classes therefore has a degree of scientific validity and accuracy.

Experimental Variables

The independent variable of this study is the teaching method: the experimental classes use the digital teaching platform for music classroom teaching, completing classroom instruction and improving students' musical creativity and literacy, while the control classes use the traditional teaching method. The dependent variables are the changes in students' interest in music and in their creative abilities and literacy brought about by the teaching. The controlled extraneous variables are the total class size, the ratio of male to female students, and the class teacher.

Table 3. Sample class analysis

Class    | Girls | Boys | Total | Average performance
Class A  | 36    | 29   | 65    | 57.10
Class a  | 34    | 26   | 60    | 56.94
Class B  | 32    | 34   | 66    | 56.02
Class b  | 28    | 35   | 63    | 54.87

Analysis of the effectiveness of the platform’s visualisations

During the innovative music teaching, students in classes A and a performed more actively. To test whether the digital music teaching platform has a positive effect on students' singing, after one month of teaching, 15 students from class A and 15 from class a (30 in total, male-to-female ratio 1:1) were selected to sing excerpts of the songs learnt in class. The singing audio was analysed with the proposed music recognition model, and the visualisation model was used to quantitatively identify the pitch and rhythm of the students' singing, allowing an objective and fair assessment of how well the students sang. The platform's visualisation results are shown in Figure 3, where (a) and (b) show the singing results of students in classes A and a, respectively. The black boxes represent the standard pitch and rhythm of the melody, and the red curve represents the student's audio; the less it overlaps with the black boxes, the greater the student's pitch deviation. After calculating statistics over the 15 notes of the first section of the song, the average intonation accuracy of the students' singing in classes A and a is 81.49% and 85.64%, respectively, indicating that digital-platform-based music teaching achieves a good teaching effect in the experimental classes.

Figure 3.

Analysis of the visual effect of the digital platform

Results of students’ general performance tests in music

The results of the comprehensive music performance test for the four classes are shown in Table 4. The analysis shows that before the experiment the overall music achievement of the four sampled classes did not differ noticeably, so the classes are comparable. The comprehensive score combines students’ participation on the digital music teaching platform, the correct rate on the post-experiment test paper, and students’ overall classroom performance. Students in classes A and a achieved higher overall scores after the experiment (78.94 and 79.64 points) than before it (57.10 and 56.94 points). The average scores of the students in these two classes on the individual questions of the general music test also increased, indicating improvement in basic music knowledge, expression of musical works, and analysis and comprehension. In addition, the differences in test scores before and after the music teaching activities in classes A and a were significant (P = 0.004 and 0.001, both < 0.05).

Comprehensive music achievement before and after the experiment

Class     Mean (Before)   Mean (After)   SD (Before)   SD (After)        t       P
Class A           57.10          78.94          6.13         7.98   -2.809   0.004
Class a           56.94          79.64          6.38         6.36   -2.660   0.001
Class B           56.02          62.38          8.18         8.20   -1.860   0.059
Class b           54.87          65.94          8.12         6.41   -2.922   0.051
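As a minimal illustration of the kind of before/after comparison reported in the table, the Welch two-sample t statistic can be computed from summary statistics alone. Note that this is only a sketch with invented numbers: the t and P values in the table come from the authors’ tests on the raw scores, which summary statistics by themselves cannot reproduce.

```python
# Welch's two-sample t statistic computed from summary statistics.
# Illustrative only; the paper's t values come from tests on raw scores.
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """t = (m1 - m2) / sqrt(s1^2/n1 + s2^2/n2)."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / se

# Hypothetical example: pre-test mean 10 (SD 2) vs. post-test mean 12
# (SD 2), with 16 students in each measurement.
t = welch_t(10.0, 2.0, 16, 12.0, 2.0, 16)  # negative: scores rose
```

In practice a library routine (e.g. a paired-samples t-test on raw scores) would also yield the P value; the formula above only shows where the t statistic itself comes from.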
Analysis of the Effectiveness of Cultivating Musical Creativity

After a semester of music teaching and learning activities, this paper analyses in detail the musical creativity of the students in the four classes at the beginning and end of the semester, based on the dimensional analysis table of musical creativity development in related studies. The results for the different classes are shown in Figure 4, with (a)-(d) representing the musical creativity scores of students in classes A, a, B and b, respectively. Creativity development was assessed in terms of appropriateness (5 points), fluency (5 points), variability (5 points), and novelty (5 points). Before the semester, the average musical creativity scores of classes A, a, B and b were 7.59, 7.47, 7.43 and 7.10, respectively, indicating that the students’ levels of musical creativity did not differ much before the experiment began. After one semester of experimental teaching, the mean musical creativity scores of the students in classes A and a were 12.01 and 12.88, respectively, a significant increase (P < 0.05) compared with the pre-semester level.

Figure 4.

Music creativity analysis results

In contrast, the musical creativity of students in classes B and b reached 9.18 and 8.22 points, an improvement but still at a low level. With the assistance of the digital music teaching platform, teachers emphasise students’ music creativity requirements when designing teaching sessions and guide students to carry out valuable and meaningful music creation activities. In daily teaching, attention is also paid to having students create music in multiple ways and from multiple perspectives, while teachers consciously let students adapt and apply what they have learnt on the digital platform in their own music creation.
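The four-dimension scoring scheme above can be sketched as a simple aggregation: each student is rated 0–5 on each dimension, the four ratings are summed (maximum 20 points), and the class score is the mean of the student totals. The student ratings below are invented for illustration.

```python
# Sketch of the creativity scoring scheme: four dimensions rated 0-5,
# summed per student, then averaged over the class. Ratings are invented.
from statistics import mean

DIMENSIONS = ("appropriateness", "fluency", "variability", "novelty")

def total_score(ratings: dict) -> int:
    """Sum the four dimension ratings (each on a 0-5 scale, max 20)."""
    return sum(ratings[d] for d in DIMENSIONS)

# Three hypothetical students in one class.
students = [
    {"appropriateness": 3, "fluency": 2, "variability": 2, "novelty": 1},
    {"appropriateness": 4, "fluency": 3, "variability": 2, "novelty": 2},
    {"appropriateness": 2, "fluency": 2, "variability": 1, "novelty": 1},
]
class_mean = mean(total_score(s) for s in students)  # totals: 8, 11, 6
```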

Conclusion

In the context of education informatization, implementing digital teaching resources in the music classroom contributes significantly to fostering students’ interest in acquiring music knowledge and to developing their musical potential and creativity. In this study, we feed the extracted music features into a recurrent neural network and incorporate a self-attention mechanism to improve the model’s performance, building a combined CLSA neural network model that enables visual processing of music audio. Based on this model, a digital music teaching platform is designed to innovate and reform traditional music teaching methods. The main findings are as follows:

This paper proposes the CLSA model, which achieves the best performance on GNSDR for vocals and GSAR for accompaniment. Compared with the SHN-4 model, it improves by 2.08 dB, 1.41 dB, and 1.07 dB on the GNSDR, GSIR, and GSAR of the vocal track, respectively, and it matches the performance of current mainstream models on the DSD100 dataset.

The visualisation results show that the average pitch correctness of the singing of students in Class A and Class a, who used the digital teaching platform, is 81.49% and 85.64%, respectively, indicating that music teaching based on the digital platform has a good teaching effect in the experimental classes.

There were significant differences in the comprehensive music test scores before and after the music teaching activities in classes A and a (P = 0.004 and 0.001, both < 0.05). In addition, after one semester of experimental teaching, the musical creativity of the students in classes A and a increased significantly (P < 0.05) compared with the pre-semester level.

Overall, the continuous application of digital teaching platforms in the future can further stimulate students’ interest in music learning, cultivate their musical aesthetic sentiment and creativity, and enhance the effectiveness of music teaching.
