The Complementary Role of Artificial Intelligence to Traditional Teaching Methods in Music Education and Its Educational Effectiveness

Introduction

In recent years, with the rapid development and application of artificial intelligence technology, its use across various fields has become increasingly diverse and intelligent. From smartphones to smart homes to smart cars, these technologies make everyday life more convenient [1-2]. Music education, as an important part of humanities and arts education, has also gradually begun to explore artificial intelligence technology. Its application in music education compensates for the shortcomings of traditional teaching methods, giving students a more efficient learning experience and thereby improving the effect of music teaching [3-5].

At present, the application of artificial intelligence in music education has made certain breakthroughs. First, artificial intelligence can help students learn and practice music. Through an intelligent music teaching system, students can play music in a virtual environment and receive timely evaluation and feedback [6-8]. The system can customize teaching content and practice plans according to each student's playing skills and music perception, which stimulates their interest and potential. In addition, AI can assist in the process of music creation and performance. By analyzing large amounts of music data and patterns, AI can generate new musical works or provide creative inspiration [9-12]. For example, AI algorithms can create new musical fragments based on a musician's creative style. Such creative aids not only help music creators improve their efficiency [13-14] but also break the limitations of traditional music creation, producing more diverse and innovative musical works. Beyond music learning and creation, AI can also play a role in music performance and interaction [15-17].

Ye, F. studied the effectiveness of artificial intelligence in music teaching by introducing an AI chatbot into piano teaching at a music school to assess its impact on student performance; the results indicated that AI benefits student learning and can be appropriately integrated with music teaching [18]. Yun, G. pointed out that while the application of AI in music teaching has received attention, it also exhibits problems such as insufficient innovation and optimization. A neural network algorithm for innovation-oriented teaching analysis was introduced, and a teaching quality evaluation scheme was developed based on pedagogical theory. The results show that, under the same evaluation standard, the quality evaluation accuracy and innovation effect of the neural network algorithm in AI music teaching are superior to those of the traditional teaching mode [19]. Hou, Y. discussed the informatization of music education in colleges and universities against the background of artificial intelligence, aiming to promote a high degree of integration between music and information education. Through a survey of students in several colleges and universities on the development status of music informatization, sample interviews showed that most students believe the combination of informatization and multimedia has enriched music teaching resources and helped promote communication between teachers and students [20]. Holster, J. discussed the impact of AI on music education, emphasizing that, while affirming the benefits of AI, ethical considerations are also necessary to ensure a student-centered pedagogical philosophy [21]. Liu, J. examined the integration of artificial intelligence and music and the role of emerging technologies as an aid to music education. Using emotion recognition as an evaluation index, the evaluative role of AI technology in college and university music teaching was analyzed with the aim of improving teaching effectiveness. The results indicated that AI, on the whole, creates a good classroom atmosphere, stimulates the enthusiasm of students and teachers, and is conducive to improving the effect of music teaching [22]. Zhang, J. et al. emphasized that AI breaks the traditional mode of music teaching; in particular, the application of computerized music systems and highly intelligent music software largely contributes to improving the quality of music teaching.

They also discussed the existing research results [23]. Yang, H. followed up on traditional music teaching in Chinese colleges and universities and examined the impact of information technology on its teaching quality. The results show that the application of information technology in music teaching activities helps teachers complete their music teaching and helps students absorb knowledge, effectively improving the effect of music teaching [24].

This study builds a smart platform for music teaching, designs a music smart classroom teaching system, and constructs an intelligent music teaching environment using artificial intelligence technology. Student satisfaction with the music smart classroom was analysed through a questionnaire survey. In addition, this study incorporates a user-based collaborative filtering recommendation algorithm into the smart classroom and analyses its recommendation effect, aiming to provide students with personalized learning material recommendations and improve learning efficiency. Meanwhile, this paper uses the DTW algorithm to match students' pitch features and evaluates their music skills based on sight-singing scoring technology.

Methodological optimisation of traditional teaching in music smart classrooms
Building a smart platform based on music teaching and learning

Information technology provides technical support for the creation and development of smart classrooms. The system architecture of the smart classroom information platform consists of three parts: the “cloud”, the “network” and the “end” [25]. Figure 1 shows this architecture and reflects the relationship between the “cloud”, “network” and “end”.

Figure 1.

Intelligent classroom information platform architecture

The “cloud” is the cloud platform, which provides cloud functions, support platforms, resource services, data processing, learning services and so on. Before class, the cloud platform provides music, pictures, videos and other resources for music teachers to make exquisite teaching courseware and also provides students with related music learning resources so that they can make a simple preview of the class content. Before, during, and after class, students’ operational data on the terminal are also calculated, organized, stored, and fed back to teachers by the cloud platform, which is conducive to teachers’ better understanding of the students’ learning situation.

The “network” is the micro cloud server, which provides local network, storage and computing services and facilitates the localization of recorded music lessons directly. The micro cloud server is a bridge connecting the cloud platform with the teacher’s and student’s sides. It transmits data collected from the mobile terminal to the cloud platform, which analyzes the data and transmits it to the mobile terminal through the micro cloud server.

The “end” refers to the end application tools, that is, the mobile terminals. Common mobile terminals include smartphones, tablet PCs, etc. In school teaching, tablet PCs are used as the main terminal equipment, covering both the student side and the teacher side. The teacher side supports creating micro-lessons and music lessons, grading and communication, importing and animating PPT slides, uploading videos, posting assignments, answering questions, individual tutoring, and so on. The student side supports receiving and managing assignments and micro-lessons, as well as teacher-student and student-student interaction.

Design of Music Teaching System in Smart Classroom

Music Smart Classroom is a learning system composed of IT elements, people, and their activities. The overall framework of the smart classroom teaching system consists of four elements: teaching activity flow, teaching application support, dynamic evaluation and analysis, and resource management and service.

The teaching activity process consists of three stages: before, during, and after the music class. The pre-class stage of the music smart classroom includes pushing relevant audio and micro-lessons to students and then assessing and analysing their micro-lesson learning to understand their grasp of the music content. The in-class stage includes a video introduction, group work to complete a musical piece, classroom presentation of the investigated content, the music teacher asking appropriate questions and guiding students to solve problems, and summarising and improving. The after-class stage includes completing the homework assigned by the teacher, receiving recommended music learning materials, and reflection and evaluation.

Teaching application support mainly involves hardware and software. The hardware mainly consists of various mobile terminal devices, such as smartphones and tablet PCs. The software mainly includes various learning apps that provide the learning, communication, management, and application functions of smart terminals, such as Rain Classroom.

Dynamic evaluation and analysis is the core function of the smart classroom, which uses accompanying data collection to collect and analyse data on the whole teaching process before, during and after class, and conducts a comprehensive and multi-faceted assessment of learning and teaching in terms of formative, summative and diagnostic aspects.

Resource management and services provide the basis for smart classroom content, which is an important support for smart classroom teaching. It mainly includes music electronic teaching materials, curriculum standards, micro-teaching, multimedia courseware, and other resource libraries. Intelligent learning resource services such as music resource subscription and automatic push can also be provided according to the needs of teachers and students.

Intelligent Teaching Environment Construction

There is a significant difference between the music smart classroom and the traditional classroom teaching environment. The music smart classroom environment includes hardware facilities such as tablets for both teachers and students, movable tables and chairs, and interactive whiteboards, in addition to teaching aids such as pianos, percussion instruments, and multimedia. Table 1 shows how the music smart classroom complements traditional classroom teaching equipment, and Figure 2 shows a schematic diagram of the smart classroom environment. The smart classroom should also maintain suitable temperature and humidity, be bright and spacious, and have a reasonable colour scheme so as to provide a comfortable physical environment for efficient teaching and learning activities.

Figure 2.

Intelligent classroom environment diagram

Music smart classroom and traditional classroom teaching equipment

Music teaching mode | Traditional classroom | Smart classroom
Teaching facilities | Fixed tables and chairs, blackboard, podium computer, projector, stereo, piano, etc. | Interactive electronic whiteboard, security equipment, audio pickup equipment, video camera, infrared tracking device, wireless broadband, top broadband, NFC, various teaching apps, various musical instruments, etc.
Analysis of Teaching Effect of Music Smart Classroom in Colleges and Universities
Teaching case study research design

Survey Objects

The questionnaire survey was conducted at a university in a specific city and was combined with music classes organized by the school. This questionnaire was mainly targeted towards second-year students majoring in music. For the sake of the questionnaire’s comprehensiveness, effectiveness, and practicality, a total of 200 students were surveyed in this paper.

Survey Method

This study first sought to understand the teaching process of the previous music classroom and to observe the teaching process of the music smart classroom through classroom observation.

Afterwards, the learning effect after the lessons was investigated further, and interviews were used to supplement the questionnaire survey of the students.

Overall Design of the Research Questionnaire

In order to be able to have a more in-depth and comprehensive understanding of the teaching needs of students in the subject of music, as well as the pedagogical effects of smart classroom technology on the music classroom, this paper designed a questionnaire survey for students. The questionnaire is composed of 30 questions and consists of four parts: a technology application survey, a learning experience survey, a learning content and process survey, and a teaching effect survey.

Through these four parts, the survey aims to understand students' feelings after the practice and their learning outcomes. Questions 1-8 in the first part concern the technical aspects of the tablet, cloud platform, intelligent terminals, etc., to determine whether students can adapt to this new type of teaching equipment. Questions 9-16 in the second part concern the learning experience, with the aim of determining how students feel about using tablets for learning. Questions 17-24 in the third part concern students' feelings about teaching in the three stages before, during, and after class, in order to understand their learning effect under this teaching method. Questions 25-30 in the fourth part concern students' learning effect after the practice, as well as their recognition and acceptance of this mode of teaching, in order to improve future teaching.

After the practice, a total of 200 student questionnaires were distributed using smart classroom technology, all of which were collected and valid. The questionnaires were divided into five levels of satisfaction with the effectiveness of smart classroom teaching: very satisfied (5 points), satisfied (4 points), average (3 points), dissatisfied (2 points) and very dissatisfied (1 point).

Analysis of teaching case study results

The questionnaire on how music smart classrooms impact teaching comprises four sections: a technology application survey, a learning experience survey, a learning content and process survey, and a teaching effect survey. The purpose is to understand students' experiences with this new type of teaching equipment, the learning process, and the resulting teaching effect. The investigation and analysis are therefore carried out from these four aspects. The satisfaction of the 200 surveyed students with music smart classroom teaching is shown in Figure 3.

Figure 3.

Teaching satisfaction survey of the music smart classroom

As can be seen from the figure, the 200 students were highly satisfied with the music smart classroom across the four dimensions: satisfaction with the 30 questionnaire questions ranged from 4.35 to 4.6 points, and no student reported being very dissatisfied with the smart classroom. For the four dimensions of technology application, learning experience, learning content and process, and teaching effect, students' average satisfaction was 4.466, 4.507, 4.466, and 4.50, respectively. A small number of students disliked the teaching mode; face-to-face interviews revealed that these students felt they had low self-control, could not restrain themselves well when exposed to modern mobile devices, and might not complete the teaching requirements seriously. However, they also emphasised that this method of teaching enhanced their interest in music lessons to some extent.

Recommendation of music learning materials based on smart classroom
User-based collaborative filtering recommendation algorithm

The core idea of user-based collaborative filtering recommendation is to recommend resources based on users' “similar interest preferences” [26-27]. Based on the users' record data, the recommendation system finds other users whose interests and preferences are similar to those of the target user and then recommends the resources preferred by those users to the target user. The algorithm is relatively simple to implement, and its deployment is flexible. Figure 4 illustrates the basic framework of collaborative filtering.

Figure 4.

Collaborative filtering basic framework

Constructing a scoring matrix based on user data

The user rating model is represented by the matrix R(m,n), in which the rows represent users and the columns represent items. The total number of users is represented by m, and the total number of items is represented by n in the matrix R(m,n). The element in the matrix is represented by R(i,j), which reflects the rating value of user i for item j. The user-item matrix is shown in Table 2.

User-item matrix

R(m,n) | Item 1 | … | Item j | … | Item n
User 1 | R(1,1) | … | R(1,j) | … | R(1,n)
… | … | … | … | … | …
User i | R(i,1) | … | R(i,j) | … | R(i,n)
… | … | … | … | … | …
User m | R(m,1) | … | R(m,j) | … | R(m,n)
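
To make the structure concrete, the following is a minimal sketch (an illustration under stated assumptions, not the authors' implementation) of how such a user-item rating matrix R(m, n) might be assembled from raw rating records, with 0 marking items a user has not yet rated; the toy records are hypothetical.

```python
import numpy as np

def build_rating_matrix(records, num_users, num_items):
    """records: iterable of (user_id, item_id, rating) with 0-based ids."""
    R = np.zeros((num_users, num_items))
    for user_id, item_id, rating in records:
        R[user_id, item_id] = rating   # R(i, j): rating of user i for item j
    return R

# Hypothetical toy data: 4 users rating 6 music learning materials on a 1-5 scale.
records = [(0, 0, 5), (0, 2, 3), (1, 0, 4), (1, 1, 2), (2, 3, 5), (3, 5, 4)]
R = build_rating_matrix(records, num_users=4, num_items=6)
print(R)
```
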
Nearest neighbour user generation

This stage is the core part of the algorithm: the set of nearest-neighbour users is matched by calculating similarity. For the similarity calculation, the following three similarity algorithms are generally applied (a combined code sketch follows the list):

Pearson correlation. The basic principle of this algorithm is to first calculate the correlation coefficient between users and then analyse user similarity according to the coefficient. Here, $U_{s,t}$ denotes the set of items rated by both user s and user t, and $r(i,j)$ denotes the rating in the rating matrix $R(m,n)$, where $i \in 1, \dots, m$ and $j \in 1, \dots, n$.

Pearson correlation coefficient: $$sim(s,t) = \frac{\sum_{n \in U_{s,t}} (r_{s,n} - \bar{r}_s)(r_{t,n} - \bar{r}_t)}{\sqrt{\sum_{n \in U_{s,t}} (r_{s,n} - \bar{r}_s)^2} \sqrt{\sum_{n \in U_{s,t}} (r_{t,n} - \bar{r}_t)^2}}$$ where $sim(s,t)$ represents the similarity between user s and user t, and $\bar{r}_s$ represents the average rating of user s.

Cosine similarity. In this algorithm, each user's ratings are represented as an n-dimensional vector; the angle between two rating vectors measures their level of similarity, and the cosine of the angle is directly proportional to the similarity between the users.

Cosine similarity: $$sim(s,t) = \frac{\vec{s} \cdot \vec{t}}{\|\vec{s}\| \, \|\vec{t}\|} = \frac{\sum_i s_i t_i}{\sqrt{\sum_i s_i^2} \sqrt{\sum_i t_i^2}}$$ where $\vec{s}$ and $\vec{t}$ represent the rating vectors of user s and user t, and $\|\vec{s}\|$ is the norm of a vector, i.e. the square root of the dot product of the vector with itself.

Modified cosine similarity. This is an improvement on the cosine similarity above that compensates for the different rating habits of users. By subtracting each user's average rating for the items, it makes the algorithm more reasonable and applicable and reduces user rating bias.

Modified cosine similarity: $$sim(s,t) = \frac{\sum_{n \in U_{s,t}} (r_{s,n} - \bar{r}_s)(r_{t,n} - \bar{r}_t)}{\sqrt{\sum_{n \in U_s} (r_{s,n} - \bar{r}_s)^2} \sqrt{\sum_{n \in U_t} (r_{t,n} - \bar{r}_t)^2}}$$
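
As referenced above, the following is a minimal sketch of the three similarity measures, assuming a dense rating matrix R in which rows are users, columns are items, and 0 marks unrated items; the co-rated set $U_{s,t}$ is realised as a boolean mask. This is an illustration under those assumptions, not the authors' code.

```python
import numpy as np

def pearson_sim(R, s, t):
    co = (R[s] > 0) & (R[t] > 0)                       # items rated by both users, U_{s,t}
    if co.sum() < 2:
        return 0.0                                     # too little overlap to compare
    rs_mean = R[s][R[s] > 0].mean()                    # \bar{r}_s over user s's own ratings
    rt_mean = R[t][R[t] > 0].mean()                    # \bar{r}_t
    ds, dt = R[s][co] - rs_mean, R[t][co] - rt_mean
    den = np.sqrt((ds ** 2).sum()) * np.sqrt((dt ** 2).sum())
    return 0.0 if den == 0 else float((ds * dt).sum() / den)

def cosine_sim(R, s, t):
    den = np.linalg.norm(R[s]) * np.linalg.norm(R[t])
    return 0.0 if den == 0 else float(np.dot(R[s], R[t]) / den)

def adjusted_cosine_sim(R, s, t):
    co = (R[s] > 0) & (R[t] > 0)
    if not co.any():
        return 0.0
    rs_mean = R[s][R[s] > 0].mean()
    rt_mean = R[t][R[t] > 0].mean()
    num = ((R[s][co] - rs_mean) * (R[t][co] - rt_mean)).sum()
    den = np.sqrt(((R[s][R[s] > 0] - rs_mean) ** 2).sum()) * \
          np.sqrt(((R[t][R[t] > 0] - rt_mean) ** 2).sum())
    return 0.0 if den == 0 else float(num / den)
```
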

Generating Recommendation Lists to Implement Recommendations

Based on the above process, the rating data of the K nearest-neighbour users are obtained. These ratings are used as a reference to predict rating values for the target user's unrated items, and a TOP-N recommendation list is generated, recommending the N items with the highest predicted ratings.
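
A minimal sketch of this last step is shown below; it assumes the dense rating matrix and similarity functions from the earlier sketches (a plain cosine similarity is used as the default here) and is an illustration rather than the authors' implementation.

```python
import numpy as np

def _cosine(R, s, t):
    den = np.linalg.norm(R[s]) * np.linalg.norm(R[t])
    return 0.0 if den == 0 else float(np.dot(R[s], R[t]) / den)

def recommend(R, target, k=3, n=5, sim_fn=_cosine):
    sims = np.array([sim_fn(R, target, u) if u != target else -np.inf
                     for u in range(R.shape[0])])
    neighbours = np.argsort(sims)[::-1][:k]            # the K most similar users
    scores = {}
    for item in range(R.shape[1]):
        if R[target, item] > 0:                        # skip items already rated
            continue
        num = sum(sims[u] * R[u, item] for u in neighbours if R[u, item] > 0)
        den = sum(abs(sims[u]) for u in neighbours if R[u, item] > 0)
        if den > 0:
            scores[item] = num / den                   # similarity-weighted prediction
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:n]                                  # TOP-N (item, predicted rating) pairs
```
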

User-based collaborative filtering recommendation effect analysis

In order to verify the effectiveness of the model, precision, recall, and F-value are chosen in this section as the evaluation indexes of the music teaching resource recommendation system. They are calculated as follows: $$precision = \frac{TP}{TP + FP}, \quad recall = \frac{TP}{TP + FN}, \quad F = \frac{2 \times precision \times recall}{precision + recall}$$
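
As a minimal sketch of these metrics, the snippet below treats the recommended items as the predicted positives and an assumed set of truly relevant materials as the ground truth; the example lists are hypothetical.

```python
def precision_recall_f(recommended, relevant):
    recommended, relevant = set(recommended), set(relevant)
    tp = len(recommended & relevant)      # recommended and actually relevant
    fp = len(recommended - relevant)      # recommended but not relevant
    fn = len(relevant - recommended)      # relevant but missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f

# Hypothetical example: 5 recommendations against 4 truly relevant materials.
print(precision_recall_f(recommended=[1, 2, 3, 4, 5], relevant=[2, 3, 6, 7]))
```
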

Fig. 5 shows the evaluation results of this paper's model for different numbers of recommendations. As can be seen from the figure, as the number of recommendations increases, the precision, recall, and F-value of the model show a decreasing trend; when the number of recommendations is 50, the corresponding values are 0.50, 0.41, and 0.38, respectively.

Figure 5.

Evaluation results for different numbers of recommendations

For five students (denoted as A-E in order) at a university, the model in this paper calculates the weighted average scores of their music material recommendations. The recommended music teaching materials are: “Sight Singing and Ear Training”, “Music Theory”, “Choral Conducting”, “Improvisation and Accompaniment”, “Learning to Compose”, and “Music Editing”, which are recorded in order as the music teaching material numbers 1-6. The weighted scores of the music teaching materials for the five students are shown in Figure 6.

Figure 6.

Weighted scores of the recommended music teaching materials for the five students

As can be seen from the figure, for each student the recommendation model in this paper uses the data of the nearest-neighbour users to compute the teaching materials with the highest weighted ratings. For example, for student A, the top three materials by weighted rating are, in order, “Learning to Compose”, “Sight Singing and Ear Training”, and “Music Editing”, with weighted ratings of 91.96, 90.94, and 90.84, so the system prioritizes recommending these three learning materials to student A.

Algorithm for assessing sight-singing ability based on music smart classroom
Pitch feature extraction model

The evaluation of sight-singing in this study uses MIDI sheet music files as the standard reference. MIDI describes the notes in a score in terms of pitch values and durations, so it is necessary to extract the pitch features and their durations from the sight-singing audio. MIDI files use the semitone value to represent the pitch of a note, and the semitone value corresponds to the fundamental frequency according to the following equation: $$Pitch = 69 + 12 \times \log_2 \frac{f_0}{440}$$

Where 69 is the semitone value corresponding to the international standard tone A (440 Hz), and f0 represents the fundamental frequency. The YIN algorithm extracts the fundamental period of each frame of the audio; taking the reciprocal of the fundamental period gives the fundamental frequency, which is then converted into a MIDI semitone number using the formula above.
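
A minimal sketch of this conversion step is given below; it assumes that per-frame fundamental frequencies have already been estimated by a YIN implementation (for example librosa.yin, mentioned here only as an assumption) and simply applies the formula above, leaving unvoiced frames as NaN.

```python
import numpy as np

def f0_to_semitone(f0):
    """f0: array of fundamental frequencies in Hz (0 for unvoiced frames)."""
    f0 = np.asarray(f0, dtype=float)
    semitones = np.full_like(f0, np.nan)
    voiced = f0 > 0
    # Pitch = 69 + 12 * log2(f0 / 440), i.e. MIDI semitone number relative to A4.
    semitones[voiced] = 69 + 12 * np.log2(f0[voiced] / 440.0)
    return semitones

print(f0_to_semitone([440.0, 466.16, 261.63, 0.0]))   # ~[69, 70, 60, nan]
```
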

In the fundamental frequency extraction based on the YIN algorithm, the audio signal is divided into frames, so the pitch sequence we obtain is also frame-based. Assuming that each singer aims to sing the score accurately, any wild points should not last long and should be preceded and followed by smooth pitch sub-sequences. In fact, by analysing the wild points in the pitch sequences of a large number of sight-singing recordings, it was found that most wild points appear between two smooth segments and last only 1-2 frames. Therefore, the pitch sequence can be smoothed as follows (a code sketch follows the list):

Frames with equal or close pitch values in the pitch sequence are grouped into sub-sequences, and the number of frames in each sub-sequence is counted.

Iterate through the frame counts of the pitch sub-sequences and find the wild points, i.e. sub-sequences of 1-2 frames whose neighbouring sub-sequences each contain more than 2 frames.

Set the pitch value of each wild point to the average pitch of the sub-sequences immediately before and after it.

Apply median filtering to the optimised pitch sequence.
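
The sketch below, referenced above, strings the four steps together; the grouping tolerance and the median-filter kernel size are assumptions chosen for illustration, not values from the paper.

```python
import numpy as np
from scipy.signal import medfilt

def smooth_pitch(pitch, tol=0.5, kernel=5):
    pitch = np.asarray(pitch, dtype=float)
    # Step 1: split into runs of (approximately) equal pitch values.
    runs = [[0]]
    for i in range(1, len(pitch)):
        if abs(pitch[i] - pitch[i - 1]) <= tol:
            runs[-1].append(i)
        else:
            runs.append([i])
    # Steps 2-3: a run of 1-2 frames between two runs longer than 2 frames is a
    # wild point; replace it with the mean pitch of its neighbouring runs.
    for r in range(1, len(runs) - 1):
        if len(runs[r]) <= 2 and len(runs[r - 1]) > 2 and len(runs[r + 1]) > 2:
            pitch[runs[r]] = np.mean(pitch[runs[r - 1] + runs[r + 1]])
    # Step 4: median filtering over the corrected sequence.
    return medfilt(pitch, kernel_size=kernel)
```
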

Pitch feature matching algorithm
DTW Algorithm Improvement

In this paper, we make the following two improvements to the traditional DTW algorithm [28-29] (a code sketch follows the two items). Let the pitch sequence to be matched and the template pitch sequence be X and Y respectively, where p denotes pitch, t denotes the number of frames, k is the number of consecutive equal-pitch sub-sequences in the sequence to be matched, and v is the number of notes in the template: $$X = \big((p_1,t_1),(p_2,t_2),(p_3,t_3),\dots,(p_k,t_k)\big), \quad Y = \big((p_1,t_1),(p_2,t_2),(p_3,t_3),\dots,(p_v,t_v)\big)$$

Replace frame-by-frame matching with matching of sub-sequences as described above, i.e. modify the distance calculation formula to: $$dist(x_i, y_j) = (p_i t_i - p_j t_j)^2$$

Modify the path selection strategy, i.e. modify the formula to: $$w(x_i, y_j) = \min\big(w(x_{i-1}, y_j) + dist(x_{i-1}, y_j),\; w(x_{i-1}, y_{j-1}) + dist(x_{i-1}, y_{j-1})\big)$$
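
The sketch below, referenced above, illustrates note-level DTW matching with the sub-sequence distance defined in the first modification. For readability it uses the textbook three-way DTW recurrence rather than the restricted path selection of the second modification, so it should be read as an illustration of the idea, not the exact algorithm; the example sequences are hypothetical.

```python
import numpy as np

def note_dist(a, b):
    # a, b are (pitch, frame-count) pairs; distance as defined above.
    return (a[0] * a[1] - b[0] * b[1]) ** 2

def dtw(X, Y, dist=note_dist):
    k, v = len(X), len(Y)
    w = np.full((k + 1, v + 1), np.inf)
    w[0, 0] = 0.0
    for i in range(1, k + 1):
        for j in range(1, v + 1):
            d = dist(X[i - 1], Y[j - 1])
            w[i, j] = d + min(w[i - 1, j], w[i, j - 1], w[i - 1, j - 1])
    return w[k, v]   # accumulated matching cost between the two sequences

# Hypothetical example: sung pitch runs vs. template notes as (semitone, frames).
sung     = [(60, 10), (62, 9), (64, 12)]
template = [(60, 10), (62, 10), (64, 10)]
print(dtw(sung, template))
```
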

Pitch Sequence Matching

The duration of a note in a score is defined relative to the duration of a beat, e.g., the duration of a quarter note is one-quarter of the duration of a beat in the current score, and the tempo of the score determines the duration of a beat. The tempo measures how quickly or slowly a piece is sung and is typically recorded as the number of beats per minute; its reciprocal is the duration of a beat (in minutes).

In actual singing, the timing of each beat is controlled by the singer, and it is difficult to sing each note for exactly its standard duration. Even when the same person sings the same piece several times, there are differences in individual notes and in the overall rhythm. Therefore, when matching the pitch sequence with the template, the two cannot be aligned strictly in time; dynamic regularisation operations such as temporal offset and scaling must be performed on the pitch sequence. In this paper, we use the DTW algorithm from the previous section to implement template matching.

Scoring techniques for sight-singing

The distance between the sequences is obtained by the DTW method and normalised to [0,1], from which the singer's score could be calculated directly. However, the main purpose of scoring sight-singing in this paper is to find objective indicators of the learner's mastery of musical skills in sight-singing and then to compute an overall performance score by combining these indicators. For convenience in describing the calculation of each indicator, we represent the matching relationship between each note in the template and the pitch sequence as: $$(p_i, t_i) \to (p_{k+1}, p_{k+2}, \dots, p_{k+m_i})$$ where $p_i$ and $t_i$ denote the pitch and number of frames of the i-th note, and $p_{k+1}, p_{k+2}, \dots, p_{k+m_i}$ is the sequence of pitches matched to that note, a sub-sequence of $m_i$ consecutive frames starting at frame k+1. The following three evaluation metrics are then defined (a combined code sketch follows them):

Timing correctness: a note is considered to be sung with correct timing when the absolute difference between the note's frame count and the frame count of its matched pitch sub-sequence is less than 0.3 of the note's frame count, calculated as: $$da_i = \begin{cases} 1, & \dfrac{|t_i - m_i|}{t_i} < 0.3 \\ 0, & \text{otherwise} \end{cases}$$

The overall timing correctness rate is: $$D_a = \frac{1}{n} \sum_{i=1}^{n} da_i$$

Pitch correctness: calculate the mean value $pm_i$ of the pitches in the sub-sequence matched to the note, and judge whether the note is sung at the correct pitch according to whether this mean equals the pitch of the note: $$pa_i = \begin{cases} 1, & p_i = pm_i \\ 0, & \text{otherwise} \end{cases}$$

The overall pitch correctness is: $$P_a = \frac{1}{n} \sum_{i=1}^{n} pa_i$$

Smoothness of breath: this mainly refers to the stability of pitch during singing. This paper uses the degree of dispersion of the pitch values in the sub-sequence matched to each note to judge whether the singer's breath is smooth. The smoothness of a single note is: $$va_i = 1 - \frac{std_i}{\max_i - \min_i}$$

For the i-th note, $std_i$, $\max_i$, and $\min_i$ are the standard deviation, maximum value, and minimum value of the sub-sequence $p_{k+1}, p_{k+2}, \dots, p_{k+m_i}$, respectively.

The overall smoothness is: $$V_a = \frac{1}{n} \sum_{i=1}^{n} va_i$$

The final performance score is the mean of the timing correctness and pitch correctness, weighted by the breath smoothness: $$result = \frac{D_a + P_a}{2} \times V_a$$
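
The combined sketch referenced above is given below. It assumes each template note is supplied with its pitch and frame count together with the matched sub-sequence of sung pitch frames; the rounding used for the pitch comparison and the handling of zero spread are illustrative assumptions, and the toy data are hypothetical.

```python
import numpy as np

def score_sight_singing(notes, matches):
    """notes: list of (pitch, frames); matches: list of matched pitch sequences."""
    da, pa, va = [], [], []
    for (p_i, t_i), seg in zip(notes, matches):
        seg = np.asarray(seg, dtype=float)
        m_i = len(seg)
        da.append(1.0 if abs(t_i - m_i) / t_i < 0.3 else 0.0)      # timing correctness da_i
        pa.append(1.0 if round(seg.mean()) == p_i else 0.0)        # pitch correctness pa_i (rounded mean)
        spread = seg.max() - seg.min()
        va.append(1.0 if spread == 0 else 1.0 - seg.std() / spread)  # breath smoothness va_i
    Da, Pa, Va = np.mean(da), np.mean(pa), np.mean(va)
    return (Da + Pa) / 2 * Va        # mean of the two accuracies, weighted by smoothness

# Hypothetical toy example: three notes and their matched frame sequences.
notes = [(60, 10), (62, 10), (64, 12)]
matches = [[60] * 9 + [61], [62] * 11, [64] * 12]
print(score_sight_singing(notes, matches))
```
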

Analysis of the Teaching Effect of Intelligent Classroom and Application of Intelligent Scoring Technology
Effectiveness of teaching music sight-singing to students in smart classrooms

In order to accurately understand the effect of the music smart classroom on the improvement of students' sight-singing ability, this study took 50 of the 200 students mentioned above as the experimental class (denoted T) and set up a control class (denoted CK) that used the sight-singing teaching mode of the traditional music classroom, and tested both groups' musical aural ability. The test evaluation criteria included pitch, rhythm, sight-singing ability, music notation, and polyphonic music perception.

Independent t-tests were conducted on the test data of the two groups of students before and after the experiment. Table 3 shows the results of the statistical analysis of the scores of the sight-singing evaluation indexes of the experimental class and the control class before and after the experiment. Figures 7 and 8 show the distribution of the results of each evaluation index in the experimental and control classes after the experiment, respectively.
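
As an illustration of this comparison, the sketch below runs an independent-samples t-test on two hypothetical sets of post-test pitch scores using scipy.stats.ttest_ind; the score arrays are invented for the example and are not the study's data.

```python
from scipy import stats

ck_pitch = [68, 71, 65, 70, 69, 72, 66, 70]   # hypothetical post-test pitch scores, CK class
t_pitch  = [77, 75, 79, 74, 78, 76, 80, 73]   # hypothetical post-test pitch scores, T class

t_stat, p_value = stats.ttest_ind(t_pitch, ck_pitch)   # two-sided independent t-test
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")          # p < 0.05 -> significant difference
```
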

Figure 7.

Distribution of results for the T class

Statistical analysis of the sight-singing evaluation index scores before and after the experiment

Index | Pre-test: CK class | Pre-test: T class | Pre-test: P value | Post-test: CK class | Post-test: T class | Post-test: P value
Pitch | 65.15 | 66.03 | 0.195 | 68.81 | 76.53 | 0.016
Rhythm | 66.32 | 66.79 | 0.563 | 70.42 | 76.79 | 0.009
Sight-singing ability | 63.84 | 63.47 | 0.224 | 65.71 | 73.53 | 0.021
Music notation | 69.81 | 70.03 | 0.317 | 72.19 | 79.11 | 0.027
Polyphonic music perception | 68.24 | 68.33 | 0.268 | 71.56 | 79.72 | 0.013

As can be seen from the table, before the experiment the difference in scores between the experimental class and the control class on the five sight-singing evaluation indexes was relatively small. For example, for pitch, the average scores of the two classes were 65.15 and 66.03, respectively, with a p-value of 0.195 (greater than 0.05), indicating no significant difference. After the experiment, there was a clear gap between the two classes on each evaluation index. The average scores of the experimental class in pitch, rhythm, sight-singing ability, music notation, and polyphonic music perception were higher than those of the control class by 7.72, 6.37, 7.82, 6.92, and 8.16 points, respectively, with p-values less than 0.05, which indicates that the smart classroom has a significant promoting effect on students' sight-singing ability.

Combining the data analyses of the two figures, it can be seen that after the experiment the scores of both classes follow an approximately normal distribution. The scores of the experimental class on the five ability evaluation indexes are distributed between 60 and 95, with most students concentrated in the 70-90 range. In contrast, the scores of the control class are lower overall, with more students scoring below 60.

Figure 8.

Distribution of results for the CK class

Differences between Artificial Intelligence Technology and Traditional Sight-Singing Scoring

In this section, the students of the above experimental class are scored on pitch, rhythm, sight-singing ability, music notation, and polyphonic music perception by the intelligent scoring system designed in this paper, based on pitch feature extraction and the DTW algorithm. The accuracy of the proposed scoring algorithm is assessed by comparing its scores with those given by the teacher. Figure 9 shows the comparison between the intelligent algorithm (AI) scores and the teacher's scores.

Figure 9.

Comparison of the intelligent algorithm's scores with the teacher's scores

The scoring results of this paper's algorithm on the five indicators for the 50 students show that the algorithm's predicted scores are close to the reference scores given by the teacher. For example, the difference between the intelligent score and the teacher's score for pitch ranges from 0.036 to 4.903 points, and for most students and sight-singing indicators the difference between the algorithm's predicted score and the teacher's score is no more than 3 points.

Conclusion

This study uses artificial intelligence technology to improve the traditional music teaching mode and designs a teaching mode based on the smart classroom. A collaborative filtering recommendation algorithm is used to recommend effective music learning materials for students. Students' mastery of musical skills in sight-singing was measured by extracting their pitch features and combining them with the sight-singing scoring method.

The questionnaire survey in this paper found that students' average satisfaction with the music smart classroom's technology application, learning experience, learning content and process, and teaching effect was 4.466, 4.507, 4.466, and 4.50, respectively. When the number of recommendations reaches 50, the recommendation model still performs well, providing suitable music material recommendations for different students. After the experiment, the average scores of the students taught in the personalized recommendation-based music smart classroom were 76.53, 76.79, 73.53, 79.11, and 79.72 in pitch, rhythm, sight-singing ability, music notation, and polyphonic music perception, respectively, and the p-values compared with the control class were all less than 0.05, indicating significant differences. The predicted scores of the intelligent scoring algorithm in this paper closely match the teachers' scores.
