Journal details
Format
Journal
eISSN
2444-8656
First published
01 Jan 2016
Publication frequency
2 times per year
Languages
English
Open Access

# The Mathematical Analysis Model of Educational System in Music Courses in Colleges and Universities

###### Accepted: 17 Apr 2022
Introduction

In recent years, digital music resources have grown rapidly and are widely used in teaching. Teachers have begun to use digital music resources to assist teaching and have achieved good learning results. However, as the volume of digital music resources increases, managing them effectively has become an urgent problem in music-assisted teaching [1]. An important function of digital music resource management is resource retrieval. Scholars at home and abroad have done extensive research on music retrieval and proposed content-based methods that work from the recorded acoustic signal, rhythm, or melody, such as humming-based and melody-based retrieval. Music itself is highly emotional, so an emotion-based digital music retrieval method conforms to the essential characteristics of music and matches users' needs. However, emotion cannot be measured precisely and is highly subjective, so how to quantify emotion is the core issue of music emotion retrieval. Fuzzy mathematics formalizes the fuzzy concepts people encounter in daily life. At present, its principles are most widely applied to object classification and individual evaluation [2], with further applications in financial management, artificial intelligence, medical diagnosis and treatment, environmental monitoring, and other fields. Fuzzy mathematics suits classification and degree judgment because the value of a membership function expresses the relative degree to which an entity belongs to a type, reflecting the multi-faceted nature of the individual. If the concepts of fuzzy mathematics are introduced into music emotion retrieval, people can objectively describe how sad or happy a piece of music is.
On this basis, an emotion space compatible with fuzzy mathematics is selected from existing models of music emotion classification. Using membership functions, the article classifies selected music fragments into different emotion types to realize music emotion retrieval. Finally, experiments verify the effectiveness of this retrieval method.

Emotional characteristics and classification of music
The emotional characteristics of music

Music includes compositional factors such as rhythm, pitch, tempo, and lyrics, as well as extended factors such as emotion, visual imagery, and imagined space. In a sense, the emotion of music is objective, because it mainly reflects the subjective emotional experience of the creator at the time of creation, an experience the creator has already positioned for the listener [3]. Different listeners in different mental states have different emotional experiences of the same music, but these differences do not stray far from the emotional state intended by the original creator. In general, the objectivity of the emotional tone of music is the major premise for achieving fuzzy retrieval.

Emotional classification of music

The currently recognized models for classifying music emotion are Russell's emotion space and Thayer's improved emotion space. The "A" in Russell's emotion space refers to activation, called "arousal" in psychology; we use it to measure how excited a person's emotional state is. "V" means valence, called "inducing force" in psychology; we use it to measure how pleasant a person feels. Russell found that people's emotional preference has an "inverted U-shaped" relationship with A, while V has a linear relationship with emotional preference [4]. Many emotions can also be expressed as combinations of the two-dimensional values of pleasantness (valence) and activation. Thayer divides the emotions of music into four major categories: joy, sadness, calm, and anxiety. Here we adopt Thayer's emotion vector space model and retain the functional relationships between A, V, and emotion described by Russell, together with their combination characteristics.

Emotional fuzzy retrieval
The basic idea of fuzzy emotional retrieval

In this retrieval method, the search terms we provide to users are happy, anxious, sad, and calm. Fuzzy retrieval mainly involves the representation of fuzzy sets, the determination of membership functions, the ranking of retrieval results, and the extraction of retrieval keywords. Suppose the domain U = {x1, x2, ⋯, xi}, where xi represents a specific piece of music. We divide this music into four emotion types: happy, anxious, sad, and calm [5]. These types are represented as four fuzzy subsets Ai ∈ F(U) (i = 1, 2, 3, 4) on U, each described by the two indicators A and V. Since the emotion of music is itself a rather complicated fuzzy notion, the indicators A and V of each fuzzy set Ai are also fuzzy sets, so Ai = (A1i, A2i) (i = 1, 2, 3, 4). The design of the membership functions is based on the relationships between A, V, and the emotion P. In fuzzy retrieval, the ranking of results is determined by each resource's membership value in the queried fuzzy set: resources of the same emotion type are arranged in descending order of membership value.
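
As a concrete illustration of this idea, the sketch below (with hypothetical track names and membership values, not data from the paper) represents each piece of music by its four membership degrees and ranks retrieval results in descending order of membership:

```python
# Each track x_i carries a membership value in [0, 1] for each of the four
# emotion fuzzy sets; retrieval for an emotion ranks tracks by that
# membership, descending. Track names and values are illustrative.
tracks = {
    "track1": {"happy": 0.90, "anxious": 0.00, "sad": 0.00, "calm": 0.13},
    "track2": {"happy": 0.35, "anxious": 0.10, "sad": 0.00, "calm": 0.60},
    "track3": {"happy": 0.00, "anxious": 0.75, "sad": 0.40, "calm": 0.00},
}

def retrieve(emotion, threshold=0.0):
    """Return (title, membership) pairs sorted by membership, descending."""
    hits = [(t, m[emotion]) for t, m in tracks.items() if m[emotion] > threshold]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)

print(retrieve("happy"))  # track1 first, then track2
```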

Membership function design

The abscissa, valence, is abbreviated as V, and its variable is denoted v; its range from left to right is [−0.33, 0.33]. The ordinate, activation, is abbreviated as A, and its variable is denoted a; its range from bottom to top is [−0.26, 0.26].

Membership function of A variable

The value of a is close to 0.26 when the emotion is very excited, such as "excitement." When the emotion is generally pleasant, a is around 0.1. When emotional preference lies in fear, anger, or tension, a falls in 0.16~0.23. In sadness, a lies mainly in the region −0.20~−0.03. When a is between −0.23 and −0.16, the emotional state can be judged as "calm."

First, determine the membership function for the emotion "happy," denoted A11. From the above, the domain of the fuzzy set "happy" is [−0.26, 0.26]. The study found that vocabulary similar to "happy" is concentrated in a fairly narrow range, so the overall shape of A11 is an intermediate (middle-type) function. Next, determine the regions with membership degrees 1 and 0. A membership degree of 1 means that when the a value of a piece of music falls in this region, its membership in the emotion "happy" is 1; this region is [0.09, 0.13). The region with membership degree 0 is [−0.26, 0.07). Then determine the transition zones, i.e., the regions where it is hard to say whether the emotion of the music is "happy" or not. Finally, the functional form of A11 is obtained:

$$ A_{11} = \begin{cases} 0, & -0.26 \le a < 0.07 \\ 2\left[(a-0.07)/0.02\right]^2, & 0.07 \le a < 0.08 \\ 1-2\left[(a-0.09)/0.02\right]^2, & 0.08 \le a < 0.09 \\ 1, & 0.09 \le a < 0.13 \\ 1-2\left[(a-0.13)/0.04\right]^2, & 0.13 \le a < 0.15 \\ 2\left[(a-0.17)/0.04\right]^2, & 0.15 \le a < 0.17 \\ 0, & 0.17 \le a < 0.24 \\ 1, & 0.24 \le a \le 0.26 \end{cases} $$
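
The piecewise definition of A11 can be transcribed directly into code; the sketch below assumes the breakpoints exactly as written above:

```python
def a11(a):
    """Membership of the activation value a in the fuzzy set 'happy' (A11),
    transcribed piece by piece from the definition in the text."""
    if a < 0.07:                              # covers -0.26 <= a < 0.07
        return 0.0
    if a < 0.08:                              # rising transition zone
        return 2 * ((a - 0.07) / 0.02) ** 2
    if a < 0.09:
        return 1 - 2 * ((a - 0.09) / 0.02) ** 2
    if a < 0.13:                              # core region: fully 'happy'
        return 1.0
    if a < 0.15:                              # falling transition zone
        return 1 - 2 * ((a - 0.13) / 0.04) ** 2
    if a < 0.17:
        return 2 * ((a - 0.17) / 0.04) ** 2
    if a < 0.24:
        return 0.0
    return 1.0                                # 0.24 <= a <= 0.26
```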

A similar method yields the membership function A12 for the emotion "anxiety," A13 for "sad," and A14 for "calm":

$$ A_{12} = \begin{cases} 0, & -0.26 \le a < -0.06 \\ 2\left[(a+0.06)/0.09\right]^2, & -0.06 \le a < -0.02 \\ 1-2\left[(a-0.03)/0.09\right]^2, & -0.02 \le a < 0.03 \\ 1, & 0.03 \le a < 0.08 \\ 0, & 0.08 \le a < 0.13 \\ 2\left[(a-0.13)/0.04\right]^2, & 0.13 \le a < 0.15 \\ 1-2\left[(a-0.17)/0.04\right]^2, & 0.15 \le a < 0.17 \\ 1, & 0.17 \le a < 0.24 \\ 0, & 0.24 \le a \le 0.26 \end{cases} $$

$$ A_{13} = \begin{cases} 0, & -0.26 \le a < -0.16 \\ 2\left[(a+0.16)/0.06\right]^2, & -0.16 \le a < -0.13 \\ 1-2\left[(a+0.10)/0.06\right]^2, & -0.13 \le a < -0.10 \\ 1, & -0.10 \le a < -0.06 \\ 1-2\left[(a+0.06)/0.09\right]^2, & -0.06 \le a < -0.02 \\ 2\left[(a-0.03)/0.09\right]^2, & -0.02 \le a < 0.03 \\ 0, & 0.03 \le a \le 0.26 \end{cases} $$

$$ A_{14} = \begin{cases} 1, & -0.26 \le a < -0.17 \\ 1-2\left[(a+0.17)/0.07\right]^2, & -0.17 \le a < -0.14 \\ 2\left[(a+0.10)/0.07\right]^2, & -0.14 \le a < -0.10 \\ 0, & -0.10 \le a \le 0.26 \end{cases} $$

Membership function of V variable

The V variable has a linear relationship with people’s emotional preferences. When the value of V increases from small to large, the order of the emotional state is sad, angry, calm, and happy. The V variable also has 4 fuzzy sets: happy, anxious, sad, and calm [6]. We denote its membership functions as A21, A22, A23 and A24 in turn. The value range of V is [−0.33, 0.33].

When the value of V is above 0.09, the emotion "happy" becomes more obvious as V increases. When V is around 0.1, the emotion also tends toward calm. When V is between −0.16 and −0.05, the probability of "tension, anger, anxiety" is very high. When V is below −0.16, sad emotions become more obvious as V decreases.

The study found that A21 is a "larger-the-truer" type function: above a value of about 0.17, "happy" or emotions similar to happiness appear. Next, determine the regions of membership 0 and 1: the region with membership 0 is [−0.33, 0.09), and the region with membership 1 is (0.19, 0.33]. The transition zone is [0.09, 0.19] and is linear in shape, so a trapezoidal membership function is selected, giving A21:

$$ A_{21} = \begin{cases} 0, & -0.33 \le v < 0.09 \\ (v-0.09)/0.1, & 0.09 \le v < 0.19 \\ 1, & 0.19 \le v \le 0.33 \end{cases} $$
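
A trapezoidal membership function of this kind is straightforward to implement; the sketch below transcribes A21 as defined above:

```python
def a21(v):
    """Membership of the valence value v in the fuzzy set 'happy' (A21):
    a rising trapezoidal ('larger-the-truer') function on [-0.33, 0.33]."""
    if v < 0.09:
        return 0.0
    if v < 0.19:
        return (v - 0.09) / 0.1   # linear transition zone
    return 1.0                    # 0.19 <= v <= 0.33
```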

Following the same method, the expressions of A22, A23, and A24 are determined:

$$ A_{22} = \begin{cases} 0, & -0.33 \le v < -0.20 \\ (v+0.20)/0.04, & -0.20 \le v < -0.16 \\ 1, & -0.16 \le v < -0.06 \\ -v/0.06, & -0.06 \le v < 0 \\ 0, & 0 \le v \le 0.33 \end{cases} $$

$$ A_{23} = \begin{cases} 1, & -0.33 \le v < -0.20 \\ (-0.16-v)/0.04, & -0.20 \le v < -0.16 \\ 0, & -0.16 \le v \le 0.33 \end{cases} $$

$$ A_{24} = \begin{cases} 0, & -0.33 \le v < -0.04 \\ (v+0.04)/0.04, & -0.04 \le v < 0 \\ 1, & 0 \le v < 0.11 \\ (0.19-v)/0.08, & 0.11 \le v < 0.19 \\ 0, & 0.19 \le v \le 0.33 \end{cases} $$

Finally, according to the multi-factor comprehensive membership method, the comprehensive membership is obtained as Ai = (A1i + A2i) / 2 (i = 1, 2, 3, 4).

Performance analysis of sentiment fuzzy retrieval
Experimental design and results

First, select experimental samples. We randomly selected 90 music samples: 40 classic European and American pieces and 50 classic Chinese pieces [7]. The selected repertoire covers various types, including folk songs, country songs, pop music, rock music, and classical music.

Secondly, determine the membership values of each piece of music: first determine the a and v values of each sample, then use the membership functions to obtain the membership degrees for the four emotions. For example, for one sample with a = 0.03 and v = 0.17, we obtain A11 = 1, A12 = 0, A13 = 0, A14 = 0, A21 = 0.8, A22 = 0, A23 = 0, A24 = 0.25, and further A1 = 0.9, A2 = 0, A3 = 0, A4 = 0.13.
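
The combination step Ai = (A1i + A2i) / 2 applied to this sample's values can be checked mechanically; the dictionary keys below are simply labels for the four emotion types:

```python
# Combining the A-variable and V-variable memberships for the sample track
# discussed in the text: Ai = (A1i + A2i) / 2.
a_memberships = {"happy": 1.0, "anxious": 0.0, "sad": 0.0, "calm": 0.0}   # A11..A14
v_memberships = {"happy": 0.8, "anxious": 0.0, "sad": 0.0, "calm": 0.25}  # A21..A24

combined = {e: (a_memberships[e] + v_memberships[e]) / 2 for e in a_memberships}
print(combined)  # happy: 0.9, anxious: 0.0, sad: 0.0, calm: 0.125
```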

Next, build a simple retrieval system. This divides into two parts: building the data table and building the retrieval page [8]. The data table for fuzzy retrieval is music1. It contains five attribute fields: title, happy, anxiety, sad, and calm.

We also use the traditional method to judge the emotion types of these 90 pieces of music, recording the results in a data table marked music2. Like music1, music2 has five attribute fields: title, happy, anxiety, sad, and calm. If a piece of music belongs to a certain emotion type, the corresponding field is recorded as "1"; otherwise, it is recorded as "0".

The search page must connect to the database; the key is the retrieval statement. The traditional retrieval statement is `select * from music2 where happy = 1`. For fuzzy retrieval, however, it cannot simply be `select * from music1 where happy > 0`, because not all music whose membership in "happy" is greater than 0 actually conveys a happy emotion. It is therefore necessary to determine a membership threshold above which the pleasant emotional experience is clearly felt [9]. The "expert assessment" method is usually used to determine this threshold; the critical value for "happy" is finally set at 0.3, so the statement becomes `select * from music1 where happy >= 0.3 order by happy desc`. Retrieval statements for the other emotion types are handled in the same way.
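
The two retrieval statements can be tried end to end with an in-memory SQLite database; the table rows below are illustrative, not the paper's data:

```python
import sqlite3

# music2 stores crisp 0/1 labels (traditional retrieval); music1 stores
# fuzzy membership values, queried with the 0.3 cutoff and ranked.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE music1 (title TEXT, happy REAL, anxiety REAL, sad REAL, calm REAL)")
conn.execute("CREATE TABLE music2 (title TEXT, happy INT, anxiety INT, sad INT, calm INT)")
conn.executemany("INSERT INTO music1 VALUES (?,?,?,?,?)",
                 [("t1", 0.9, 0.0, 0.0, 0.13),
                  ("t2", 0.4, 0.1, 0.0, 0.6),
                  ("t3", 0.2, 0.8, 0.3, 0.0)])
conn.executemany("INSERT INTO music2 VALUES (?,?,?,?,?)",
                 [("t1", 1, 0, 0, 0), ("t2", 0, 0, 0, 1), ("t3", 0, 1, 0, 0)])

# Traditional crisp query: exact 0/1 match, no ordering by degree.
crisp = conn.execute("SELECT title FROM music2 WHERE happy = 1").fetchall()

# Fuzzy query: keep tracks clearing the 0.3 membership cutoff,
# ranked by membership in descending order.
fuzzy = conn.execute(
    "SELECT title FROM music1 WHERE happy >= 0.3 ORDER BY happy DESC").fetchall()
print(crisp, fuzzy)  # [('t1',)] [('t1',), ('t2',)]
```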

Finally, collect and organize data. First, retrieve all music tracks with the emotion “happy” in the data table music1 and record the total number of retrieved records N1. Record the other 3 emotions in the same way. The data sheet music2 is similarly processed and marked as N2. The statistical results of the number of retrieved records are listed in Table 1.

Table 1. Search results of music1 and music2.

| Emotion | n1 (music1) | n2 (music2) |
| --- | --- | --- |
| Happy | 33 | 30 |
| Anxiety | 14 | 8 |
| Sad | 33 | 30 |
| Calm | 26 | 22 |
| Total records | 106 | 90 |

In addition, the retrieval ability and accuracy of this method are examined through recall, precision, and the F value. Recall, denoted R, is the percentage of relevant documents retrieved out of all relevant documents in the collection. Precision, denoted P, is the percentage of relevant documents among all retrieved documents [10]. For example, for the emotion type "happy" in Table 1, R = 33/37 = 89.2%, where 33 is the number of retrieved records relevant to "happy" and 37 is the total number of music resources with a "happy" membership greater than zero, i.e., all resources that can make people feel the emotion "happy" at least a little. Figure 1 and Figure 2 show the recall and precision of the four emotions under the two retrieval methods.

In general, the total number of documents relevant to a given query cannot be known exactly, so R is usually approximate. Here, however, the specific information and sample count of all tested resources were clarified in advance, so the R value of each emotion type can be obtained [11]. Recall and precision each indicate only one aspect of a retrieval method; deciding which method is better requires combining them. Therefore, the harmonic mean F of P and R is introduced to examine retrieval ability, where F = 2 / (1/R + 1/P), R ≠ 0, P ≠ 0. The F values of the different emotion types under the two retrieval methods are calculated and shown in Figure 3.
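
The recall figure for "happy" and the harmonic-mean formula can be verified directly:

```python
def f_measure(recall, precision):
    """Harmonic mean of recall and precision: F = 2 / (1/R + 1/P)."""
    return 2 / (1 / recall + 1 / precision)

# Recall for the 'happy' type in Table 1: 33 retrieved relevant records
# out of 37 relevant records overall.
recall_happy = 33 / 37
print(round(recall_happy, 3))  # 0.892, i.e. 89.2%
```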

Finally, the performance of the two retrieval methods is evaluated from a user-oriented perspective, mainly through the coverage rate C and the novelty rate N. Here C = Ru/U, where U is the total number of relevant documents known by the user and Ru is the number of documents in the intersection of the search results and U; C thus represents the proportion of user-known relevant documents found by the system [12]. In random interviews with 10 users, we asked each to choose, among the 90 pieces, the music belonging to each emotion. For the emotion "happy," the ten users' selections are marked u1, u2, u3, ⋯, u10, giving the set U(h) = u1 ∩ u2 ∩ ⋯ ∩ u10; the U sets of the other emotion types are obtained by analogy. The novelty rate N = Rk/(Ru + Rk), where Rk is the number of relevant documents returned by the search that were previously unknown to the user; N represents the proportion of new relevant documents returned by the system. After collecting and sorting the data, Figure 4 and Figure 5 are obtained.
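
The coverage and novelty computations reduce to set operations; the user selections and retrieved set below are illustrative, and only three users are shown for brevity:

```python
# U is the intersection of the interviewed users' choices for one emotion;
# C = |Ru| / |U| with Ru the known relevant items found, and
# N = |Rk| / (|Ru| + |Rk|) with Rk the retrieved relevant items the users
# did not already know. All sets here are hypothetical examples.
user_sets = [{"t1", "t2", "t4"}, {"t1", "t2"}, {"t1", "t2", "t5"}]  # u1..u3
U = set.intersection(*user_sets)   # items every interviewed user named

retrieved = {"t1", "t2", "t6"}
Ru = retrieved & U                 # known relevant items found by the system
Rk = {"t6"}                        # assume t6 is relevant but previously unknown

C = len(Ru) / len(U)
N = len(Rk) / (len(Ru) + len(Rk))
print(C, N)
```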

Experimental data analysis
Number of retrieved records

For each emotion type in Table 1, the number of music records in music1 exceeds the number in music2, and the total number of retrieved records is 16 more than that of music2 (106 vs. 90). This shows that the traditional method of emotion retrieval ignores the diversity of music emotions, while fuzzy retrieval reflects that diversity better.

Sorting search results

The fuzzy search results are arranged in descending order of membership. If the user wants all music with the emotion "happy," the result presents music with membership 1 first, followed by progressively less "happy" music, down to the cutoff membership of 0.3. Under the traditional retrieval mode, the results have no such ordering; they simply follow the order in which the data table music2 was filled in.

Recall rate, precision rate, and F value

From Figure 1, the retrieval capability of fuzzy retrieval is beyond doubt: the recall rate is close to 90%. Recall and precision are linked, and the increase in precision here goes along with an increase in recall. Overall, the retrieval ability of this method is excellent.

Figure 2 shows that the average recall of the traditional method is only 72.3%, about 16 percentage points lower than that of the fuzzy query. The emotions of anxiety and sadness are somewhat similar, so the probability of misjudgment is higher when non-quantitatively assigning a piece of music to one emotion type, which in turn affects the precision and recall of these two types [13]. The comparison shows that the retrieval ability of fuzzy retrieval is much higher than that of the ordinary non-quantitative query.

Figure 3 shows that the F values of both retrieval methods are above 65%. For fuzzy retrieval, the F value of each emotion is above 93%, higher than that of the traditional method, even for the easily confused emotions "anxiety" and "sad."

Coverage rate and the new rate

Figure 4 shows that fuzzy retrieval covers more of the relevant resources users already know. For the two emotion types "happy" and "sad," the coverage rate reaches 100%, and for the intermediate emotion "anxiety" it is much higher than that of traditional retrieval [14]. This again shows that fuzzy retrieval better reflects the diversity of music emotions.

The N-value curve of fuzzy retrieval in Figure 5 lies above that of traditional retrieval, meaning the novelty rate of fuzzy retrieval is higher. From the user's point of view, fuzzy retrieval not only covers the user's search needs better but can also provide new, relevant music resources beyond what the user already knows.

Conclusion

The fuzzy retrieval method technically accommodates the diversity of music emotions, bringing the digital management of music closer to the essential characteristics of music. The experimental results prove that realizing music emotion retrieval based on fuzzy mathematics is feasible and effective.


References

[1] Zou, H., He, D. Technology sharing game from ecological perspective. Applied Mathematics and Nonlinear Sciences, 2021; 6(1): 81–92. doi:10.2478/amns.2021.1.00018

[2] Abozaid, A., Selim, H., Gadallah, K., Hassan, I., Abouelmagd, E. Periodic orbit in the frame work of restricted three bodies under the asteroids belt effect. Applied Mathematics and Nonlinear Sciences, 2020; 5(2): 157–176. doi:10.2478/amns.2020.2.00022

[3] Gómez, R., Nasser, L. Symbolic structures in music theory and composition, binary keyboards, and the Thue–Morse shift. Journal of Mathematics and Music, 2021; 15(3): 247–266. doi:10.1080/17459737.2020.1732490

[4] Garani, S. S., Seshadri, H. An algorithmic approach to South Indian classical music. Journal of Mathematics and Music, 2019; 13(2): 107–134. doi:10.1080/17459737.2019.1604845

[5] Chuan, C. H., Agres, K., Herremans, D. From context to concept: exploring semantic relationships in music with word2vec. Neural Computing and Applications, 2020; 32(4): 1023–1036. doi:10.1007/s00521-018-3923-1

[6] da Silva, R. S. R. On music production in mathematics teacher education as an aesthetic experience. ZDM, 2020; 52(5): 973–987. doi:10.1007/s11858-019-01107-y

[7] Roy, S., Biswas, M., De, D. iMusic: a session-sensitive clustered classical music recommender system using contextual representation learning. Multimedia Tools and Applications, 2020; 79(33): 24119–24155. doi:10.1007/s11042-020-09126-8

[8] Zulić, H. How AI can change/improve/influence music composition, performance and education: three case studies. INSAM Journal of Contemporary Music, Art and Technology, 2019; 1(2): 100–114.

[9] Das, S., Bhattacharyya, B. K., Debbarma, S. Building a computational model for mood classification of music by integrating an asymptotic approach with the machine learning techniques. Journal of Ambient Intelligence and Humanized Computing, 2021; 12(6): 5955–5967. doi:10.1007/s12652-020-02145-1

[10] Santana Júnior, C. A. D., Lima, S. R. D. Informational behaviour in Facebook focused on Brazilian popular music (BPM). Investigación bibliotecológica, 2019; 33(80): 13–30. doi:10.22201/iibi.24488321xe.2019.80.57931

[11] Hong, J. W., Peng, Q., Williams, D. Are you ready for artificial Mozart and Skrillex? An experiment testing expectancy violation theory and AI music. New Media & Society, 2021; 23(7): 1920–1935. doi:10.1177/1461444820925798

[12] König, N., Schredl, M. Music in dreams: A diary study. Psychology of Music, 2021; 49(3): 351–359. doi:10.1177/0305735619854533

[13] Panwar, S., Rad, P., Choo, K. K. R., Roopaei, M. Are you emotional or depressed? Learning about your emotional state from your music using machine learning. The Journal of Supercomputing, 2019; 75(6): 2986–3009. doi:10.1007/s11227-018-2499-y

[14] El Naschie, M. S. On the Fractal Counterpart of C. Vafa's Twelve-Dimensional F-theory and the A. Schoenberg Twelve-tone Music Implicit in the Standard Model of High Energy Elementary Particles. International Journal of Innovation in Science and Mathematics, 2019; 7(5): 222–230.
