The Stability Model of Piano Tone Tuning Based on Ordinary Differential Equations

Published online: 15 Jul 2022
Volume & Issue: AHEAD OF PRINT
Pages: -
Received: 17 Feb 2022
Accepted: 22 Apr 2022
Introduction

The piano tone signal is composed of a fundamental tone and overtones. The fundamental determines the pitch of a piano tone signal, so detecting the pitch period is the key to identifying piano notes [1]. Pitch-period detection methods fall mainly into frequency-domain and time-domain identification. The short-time ordinary-differential-equation method is a classic time-domain detection algorithm: it is simple to compute and widely used, but it suffers from octave (double-frequency) and half-frequency errors in the fundamental.

Center clipping removes the central region where the energy of each note is concentrated, which reduces the amount of computation and speeds up processing [2]. At the same time, it avoids the above errors to a certain extent and improves the recognition rate, but it still has limitations. Some scholars have proposed combining three-level center clipping with autocorrelation processing; however, this increases the amount of computation and is unsuitable for fast-computation scenarios. In addition, when ordinary differential equations estimate the pitch period, the estimate can jump between frames [3], and the identification process is disturbed by half-frequency points, double-frequency points, and random error points. Some scholars have applied a zero-insertion algorithm with a corresponding low-pass filter to the three-level-clipping ordinary differential equation; others have combined the three-level center-clipping autocorrelation function with the cyclic average magnitude difference function. These algorithms achieve a satisfactory recognition rate on music with a moderate tempo, but the recognition rate drops rapidly on fast-paced music. This paper presents an ordinary-differential-equation model whose goal is to find ordinary differential equations on a smaller scale that accommodate fast-paced music. The algorithm avoids, to a certain extent, the missed detections, false detections, and recognition errors that traditional algorithms produce on fast-paced music, and it significantly improves recognition accuracy.

Autocorrelation function for three-level center clipping

Suppose z_i(x) is the i-th frame of the musical signal's time series w(t) after windowing and framing; the subscript i denotes the frame index [4]. The pitch period is determined by comparing the similarity between the original signal and a delayed copy of it: the two signals are maximally similar when the delay equals the pitch period. We take the distance between two maxima of the short-time autocorrelation function directly as the initial value of the pitch period. Let C_L be the clipping level. The center clipping function C[z_i(x)] is

$$C[z_i(x)] = \begin{cases} z_i(x) - C_L, & z_i(x) > C_L \\ 0, & |z_i(x)| \le C_L \\ z_i(x) + C_L, & z_i(x) < -C_L \end{cases} \tag{1}$$

The input-output function of the three-level center clipping method is

$$y_i'(x) = C'[z_i(x)] = \begin{cases} 1, & z_i(x) > C_L \\ 0, & |z_i(x)| \le C_L \\ -1, & z_i(x) < -C_L \end{cases} \tag{2}$$

That is, the clipper output y_i′(x) is 1 when z_i(x) > C_L, −1 when z_i(x) < −C_L, and 0 otherwise. C_L is chosen as follows: take the maximum amplitude of the first 100 samples of z_i(x) and of the last 100 samples, take the smaller of the two, and multiply it by 0.68 to obtain the threshold level C_L. From formula (2), the autocorrelation function of the three-level clipping output y_i′(x) is

$$R_i(k) = \sum_{m=1}^{N-k} y_i(m)\, y_i'(m+k) \tag{3}$$

Because y_i′(x) takes only the three values 1, 0, and −1, evaluating formula (3) requires nothing but additions and subtractions. This saves considerable time in practice and creates the conditions for real-time computation.
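To make the pipeline concrete, here is a minimal Python sketch of formulas (1)–(3). It is our own illustration, not the authors' code; for simplicity it uses the three-level sequence y_i′ for both factors of formula (3), so every product is +1, −1, or 0.

```python
import numpy as np

def clipping_level(z, n=100, factor=0.68):
    """C_L: 0.68 times the smaller of the peak amplitudes found in the
    first n and last n samples of the frame (n = 100 in the paper)."""
    return factor * min(np.max(np.abs(z[:n])), np.max(np.abs(z[-n:])))

def three_level_clip(z, c_l):
    """Formula (2): +1 above C_L, -1 below -C_L, 0 in between."""
    return np.where(z > c_l, 1, np.where(z < -c_l, -1, 0)).astype(np.int8)

def autocorr_three_level(y, max_lag):
    """Formula (3): because the samples are only +1/0/-1, each product
    reduces to an addition or a subtraction."""
    n = len(y)
    return np.array([np.sum(y[:n - k] * y[k:]) for k in range(1, max_lag + 1)])
```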

Ordinary Differential Equation Model Design

The left and right Riemann-Liouville fractional integral operators have the semigroup property [5]. Suppose f ∈ C([a, b], ℝ) satisfies f ∈ L^1([a, b], ℝ) almost everywhere on [a, b]. Then for all t ∈ [a, b] and γ1, γ2 > 0 the following formulas hold [6]:

$${}_aD_t^{-\gamma_1}\left({}_aD_t^{-\gamma_2} f(t)\right) = {}_aD_t^{-\gamma_1-\gamma_2} f(t) \quad \text{and} \quad {}_tD_b^{-\gamma_1}\left({}_tD_b^{-\gamma_2} f(t)\right) = {}_tD_b^{-\gamma_1-\gamma_2} f(t) \tag{4}$$
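As a quick numerical illustration of the semigroup property (4), the closed-form power rule for the left Riemann-Liouville integral, I^γ t^μ = Γ(μ+1)/Γ(μ+γ+1) t^(μ+γ), lets us check that composing two fractional integrals matches a single integral of the summed order. The Python below is our own sketch under that assumption, not part of the paper.

```python
from math import gamma, isclose

def rl_int_power(mu, g, t):
    """Left Riemann-Liouville integral of order g (lower limit 0) of t**mu:
    I^g t^mu = Gamma(mu+1) / Gamma(mu+g+1) * t**(mu+g)."""
    return gamma(mu + 1) / gamma(mu + g + 1) * t ** (mu + g)

# Semigroup check, formula (4), for f(t) = t: I^{g1}(I^{g2} f) = I^{g1+g2} f.
g1, g2, t = 0.3, 0.7, 2.5
inner_coeff = gamma(2) / gamma(2 + g2)          # I^{g2} t = c * t**(1 + g2)
nested = inner_coeff * gamma(2 + g2) / gamma(2 + g1 + g2) * t ** (1 + g1 + g2)
assert isclose(nested, rl_int_power(1.0, g1 + g2, t))
```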

Suppose v ∈ C_0^∞(0, t_1], v′ ∈ C_0^∞(0, t_1], and v = 0 for t ∈ {0} ∪ (t_1, T]; then

$$0 = \int_0^{t_1} \left( \frac{1}{2}\left({}_0D_t^{-\beta} u' + {}_tD_{t_1}^{-\beta} u'\right) + \int_0^t f_1(s, u(s))\, ds \right) v'(t)\, dt \tag{5}$$

Because f_1 ∈ C((0, t_1] × ℝ, ℝ), we have

$$\frac{d}{dt}\left( \frac{1}{2}\,{}_0D_t^{-\beta} u' + \frac{1}{2}\,{}_tD_{t_1}^{-\beta} u' \right) + f_1(t, u(t)) = 0 \tag{6}$$

We assume v ∈ C_0^∞(s_1, T] and v′ ∈ C_0^∞(s_1, T]; then the second fractional differential equation holds:

$$-\frac{d}{dt}\left( \frac{1}{2}\,{}_{s_1}D_t^{-\beta} u'(t) + \frac{1}{2}\,{}_tD_T^{-\beta} u'(t) \right) = f_2(t, u(t)), \quad t \in (s_1, T] \tag{7}$$

We know from f_1 ∈ C((0, t_1] × ℝ, ℝ) and f_2 ∈ C((s_1, T] × ℝ, ℝ) that

$$\frac{1}{2}\,{}_0D_t^{-\beta} u' + \frac{1}{2}\,{}_tD_{t_1}^{-\beta} u' \in C^1(0, t_1] \quad \text{and} \quad \frac{1}{2}\,{}_{s_1}D_t^{-\beta} u' + \frac{1}{2}\,{}_tD_T^{-\beta} u' \in C^1(s_1, T] \tag{8}$$

We now show that u satisfies the transient and non-transient impulse conditions. Substituting formula (7) into formula (8) gives

$$\begin{aligned} 0 = {} & \frac{1}{2}\int_{t_1}^{s_1} \left( {}_{t_1}D_t^{-\beta} u' + {}_tD_{s_1}^{-\beta} u',\, v' \right) dt + \frac{1}{2} v(t_1)\left({}_0D_t^{-\beta} u'\right)\Big|_{t=t_1^-} - \frac{1}{2} v(0)\left({}_tD_{t_1}^{-\beta} u'\right)\Big|_{t=0} \\ & + \frac{1}{2} v(T)\left({}_{s_1}D_t^{-\beta} u'\right)\Big|_{t=T} - \frac{1}{2} v(s_1)\left({}_tD_T^{-\beta} u'\right)\Big|_{t=s_1^+} + \frac{a}{b} u(0) v(0) + \frac{c}{d} u(T) v(T) + I(u(t_1)) v(t_1) \end{aligned} \tag{9}$$

Suppose v ∈ C_0^∞(t_1, s_1], v′ ∈ C_0^∞(t_1, s_1], and v = 0 for t ∈ [0, t_1] ∪ (s_1, T]. Substituting this v(t) into (9), we get

$$\frac{1}{2}\int_{t_1}^{s_1} \left( {}_{t_1}D_t^{-\beta} u' + {}_tD_{s_1}^{-\beta} u',\, v' \right) dt = 0 \tag{10}$$

$${}_{t_1}D_t^{-\beta} u'(t) + {}_tD_{s_1}^{-\beta} u'(t) = \left({}_{t_1}D_t^{-\beta} u'\right)\Big|_{t=s_1^-} = \left({}_tD_{s_1}^{-\beta} u'\right)\Big|_{t=s_1^+} = g, \quad t \in (t_1, s_1] \tag{11}$$

Substituting formula (10) into formula (11) yields

$$\begin{aligned} 0 = {} & \left[ -\frac{1}{2} g + \frac{1}{2}\left({}_0D_t^{-\beta} u'\right)\Big|_{t=t_1^-} + I(u(t_1)) \right] v(t_1) + \frac{1}{2}\left[ g - \left({}_tD_T^{-\beta} u'\right)\Big|_{t=s_1^+} \right] v(s_1) \\ & + \left[ -\frac{1}{2}\left({}_tD_{t_1}^{-\beta} u'\right)\Big|_{t=0} + \frac{a}{b} u(0) \right] v(0) + \left[ \frac{1}{2}\left({}_{s_1}D_t^{-\beta} u'\right)\Big|_{t=T} + \frac{c}{d} u(T) \right] v(T) \end{aligned} \tag{12}$$

Taking v(s_1) = v(0) = v(T) = 0 then shows that the transient pulse condition

$$\frac{1}{2}\left({}_tD_{s_1}^{-\beta} u'\right)\Big|_{t=t_1^+} - \frac{1}{2}\left({}_0D_t^{-\beta} u'\right)\Big|_{t=t_1^-} = I(u(t_1)) \tag{13}$$

holds.

Improved autocorrelation function pitch extraction algorithm

We perform endpoint detection on the audio sequence w(t). Each endpoint at the start of the original sequence is denoted S(i), i = 1, 2, ⋯, n. After accurate note segmentation, each endpoint is taken to correspond to the starting point of one basic note. Let T(i) be the pitch period of the i-th note in the original audio sequence. We compute the autocorrelation function of the interval s(i) = w[S(i), S(i) + l] over a fixed-length window [7], where the window length is l = 4096. The first ordinary differential equation corresponds to the period of the fundamental tone. From formula (3), the autocorrelation function of s(i) is obtained as R_i(x) = corr(s(i)), x = 1, 2, ⋯, l.

Let R_{i,max} = max(R_i(x)). Ideally, for about 70% of notes, the data-frame waveform after three-level clipping and autocorrelation follows this rule: the point P_{i,max} where R_{i,max} occurs coincides with the first peak point P_i(1) (Figure 1); that is, the first peak point 1 and the maximum peak 3 are the same point [8]. In this case the pitch period is T(i) = I_i(1), as sketched below after Figure 1.

Figure 1

Autocorrelation function waveforms under ideal conditions
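A minimal sketch of this ideal case (our own Python with illustrative names; r is assumed to start after the zero-lag main lobe of the autocorrelation):

```python
import numpy as np

def peak_lags(r):
    """Lags of the local maxima of an autocorrelation curve R_i(x)."""
    return np.flatnonzero((r[1:-1] > r[:-2]) & (r[1:-1] >= r[2:])) + 1

def pitch_period_ideal(r):
    """Ideal case of Figure 1: the first peak is also the global maximum,
    so the pitch period is its lag, T(i) = I_i(1)."""
    lags = peak_lags(r)
    if lags.size and r[lags[0]] == r[lags].max():
        return int(lags[0])
    return None  # otherwise fall back to the threshold selection below
```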

In a few cases the formant affects the signal and frequency-doubling interference appears [9], separating P_{i,max} from P_i(1). We then first need to pick a suitable peak point. Figure 2 shows the waveform of a data frame of the 11th note, E4, of the piano piece "To Alice" after three-level clipping and autocorrelation; here the second peak point 2 coincides with the largest peak point 3. Assume a threshold H_{i,min} = R_{i,max} / k1, where k1 is a constant. Record the peak sequence P_i(j) satisfying R_i(x) > H_{i,min} together with the corresponding lags I_i(j), where R_i(I_i(j)) = P_i(j).

Figure 2

Note E4 autocorrelation function waveform

The value of k1 must ensure that P_i(j) excludes peaks whose amplitude is too small, such as peak point 1 in Figure 2, while retaining large and possibly correct peaks such as peak point 2. The amplitude of peak point 2 is P_i(1) and its lag is I_i(1); the amplitude of peak point 3 is P_{i,max} and its lag is I_{i,max}. This paper takes the threshold k1 = 2, and the candidate selection is sketched below.
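A short sketch of this candidate selection, reusing peak_lags from the previous snippet (again our own illustration, not the authors' code):

```python
def candidate_peaks(r, k1=2.0):
    """Keep only peaks above H_i,min = R_i,max / k1 (the paper takes k1 = 2).
    Returns the lags I_i(j) and amplitudes P_i(j) of the surviving peaks."""
    lags = peak_lags(r)
    keep = lags[r[lags] > r.max() / k1]
    return keep, r[keep]
```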

Correct selection of the peak point requires a further threshold judgment. We take the amplitude ratio C_R = P_{i,max} / P_i(1) of the largest peak point to the first peak point [10]. Figure 3 compares the waveforms of a data frame of the 35th note, D5, of the piano piece "Wedding in a Dream" after three-level clipping and autocorrelation, before and after the data frame is shifted.

Figure 3

Comparison of (a) before frameshift and (b) after frameshift of note D5 waveform

After calculation, the amplitude ratios in Figures 3(a) and 3(b) are C_{R,1} = 1.66 and C_{R,2} = 1.36, respectively. The value of C_R therefore changes when the data frame is shifted.

After multiple translations of the data frame, the amplitude ratio fluctuates within a certain range (Figure 4). Translating the above data frame 8 times yields the amplitude-ratio sequence C_R(b).

Figure 4

Amplitude ratio changes for multiple frameshifts

Assume the threshold k2 is a constant. We count the number n1 of frames with C_R(b) > k2 and the number n2 with C_R(b) < k2. If n2 > n1, we set T(i) = I_i(1); if n1 > n2, we set T(i) = I_{i,max}. The value of k2 directly affects the statistics [11]. After tuning on many songs, k2 = 1.43 gives the best result.
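Putting the pieces together, the following sketch implements this voting rule on top of candidate_peaks from the previous snippet. It is our own simplified reading of the procedure: it takes the peak lags from the last shifted frame, a detail the paper does not specify.

```python
import numpy as np

def decide_period(shifted_autocorrs, k1=2.0, k2=1.43):
    """Vote over the autocorrelations of several shifted data frames
    (8 shifts in the paper). For each frame, C_R = P_max / P(1); count
    C_R > k2 as n1 and C_R < k2 as n2. If n2 > n1 take the first peak,
    T(i) = I_i(1); otherwise take the largest peak, T(i) = I_i,max."""
    n1 = n2 = 0
    first_lag = max_lag = None
    for r in shifted_autocorrs:
        lags, amps = candidate_peaks(r, k1)
        if lags.size == 0:
            continue
        first_lag = int(lags[0])
        max_lag = int(lags[np.argmax(amps)])
        c_r = amps.max() / amps[0]        # C_R = P_max / P(1)
        if c_r > k2:
            n1 += 1
        elif c_r < k2:
            n2 += 1
    return first_lag if n2 > n1 else max_lag
```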

Results and Discussion

The tone data files were synthesized with the software Everyone Piano according to the score. The music files were obtained from the right-hand part of the piano score, recorded as a stereo mix; the sound source used by the software is the mad piano. The standard frequency of note i is

$$F(i) = f_{a^1} \cdot 2^{n/12}$$

where f_{a^1} = 440 Hz is the first international pitch and n is the number of semitones in the interval from tone i to tone a1; n is negative when tone i is lower than a1. The detected fundamental frequency is F′(i) = f_s / T(i), where f_s is the sampling frequency of the piece [12]. The cent deviation is defined as O(i) = log_k(F′(i) / F(i)), where $k = \sqrt[1200]{2}$. Let U = {x | −50 < x < 50}. Note i is considered correctly identified when its cent deviation satisfies O(i) ∈ U. Tables 1–3 list the results of the traditional three-level autocorrelation function and the improved ordinary differential equation [13]. We divided all compositions and their variable-speed versions into three sample groups according to rate (notes/s): slow, medium, and fast. The average velocity υ of a composition is its number of notes per second. A song with υ < 3 is "slow," one with 3 ≤ υ < 4 is "medium tempo," and one with υ ≥ 4 is "fast." Finally, Table 4 compares the recognition results of the slow, medium, and fast music groups.
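The scoring rule can be computed in a few lines (our own sketch; function and parameter names are illustrative):

```python
from math import log2

F_A1 = 440.0   # first international pitch f_a1

def standard_freq(n):
    """F(i) = f_a1 * 2**(n/12); n = semitones from a1 (negative below a1)."""
    return F_A1 * 2 ** (n / 12)

def cent_deviation(period, fs, n):
    """O(i) = log_k(F'/F) with k = 2**(1/1200), i.e. 1200 * log2(F'/F);
    a note counts as correct when -50 < O(i) < 50."""
    f_est = fs / period                    # F'(i) = fs / T(i)
    return 1200 * log2(f_est / standard_freq(n))

# Example: a detected period of 100 samples at fs = 44100 Hz gives
# F' = 441 Hz, about +3.9 cents from a1 (n = 0): well within tolerance.
```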

Table 1. Comparison of recognition results for slow music

Song (excerpt) | N | Duration/s | Rate v/(notes/s) | Accuracy of this method R/%
Little Stars | 42 | 30 | 1.4 | 100
Castle in the Sky | 56 | 33 | 1.7 | 100
Faded | 51 | 26 | 1.9 | 100
Love Romance | 46 | 21 | 2.2 | 100
Maps (0.75× speed) | 41 | 19 | 2.2 | 100
To Alice | 70 | 26 | 2.7 | 91.2
Canon (0.75× speed) | 74 | 27 | 2.7 | 89.0
Average | 54.3 | 26 | 2.1 | 97.1

Table 2. Comparison of recognition results for medium-tempo music

Song (excerpt) | N | Duration/s | Rate v/(notes/s) | Accuracy of this method R/%
Dream Wedding | 106 | 34 | 3.1 | 100
Flight of the Bumblebee (0.25× speed) | 64 | 20 | 3.2 | 87.5
Faded (1.5× speed) | 51 | 17 | 3.2 | 86.3
To Alice (1.25× speed) | 125 | 22 | 3.2 | 80.0
Romance of Love (2× speed) | 46 | 14 | 3.3 | 95.7
Happy Farmer | 44 | 13 | 3.3 | 79.6
Maps | 41 | 12 | 3.4 | 97.6
Canon | 74 | 20 | 3.7 | 85.9
Little Star (2× speed) | 42 | 11 | 3.8 | 85.7
Average | 65.9 | 18.1 | 3.4 | 88.7

Table 3. Comparison of recognition results for fast music

Song (excerpt) | N | Duration/s | Rate v/(notes/s) | Accuracy of this method R/%
Happy Farmer (1.25× speed) | 44 | 11 | 4.0 | 59.1
Dream Wedding (1.25× speed) | 106 | 25 | 4.2 | 85.8
Faded (2× speed) | 51 | 12 | 4.3 | 78.8
Maps (1.5× speed) | 41 | 9 | 4.5 | 65.6
Croatian Rhapsody (0.75× speed) | 122 | 27 | 4.5 | 59.0
Canon (1.25× speed) | 74 | 16 | 4.6 | 57.5
Flight of the Bumblebee | 64 | 11 | 5.9 | 51.3
Croatian Rhapsody | 122 | 20 | 6.1 | 47.5
Average | 78 | 16.4 | 4.8 | 63.1

Table 4. Comparison of recognition results across slow, medium, and fast music

Song type | Three-level center clipping accuracy R1/% | Accuracy of this method R2/% | Absolute difference |R1 − R2|/%
Slow | 93.3 | 97.1 | 3.8
Medium | 74.6 | 88.7 | 14.1
Fast | 42.9 | 63.1 | 20.2
Average | 70.3 | 83.0 | 12.7

Tables 1–3 compare the algorithm in this paper with the three-level clipping method. Table 1 shows that when the tempo is slow, the relative error rate between the two methods is only within 5.1% [14]; the traditional three-level clipping method then comes close to the recognition rate of this method. Table 2 shows that when the tempo is faster, the average relative error rate between the two methods is 20.6%, and the accuracy of the improved algorithm is higher than that of the traditional algorithm. Table 3 shows that the relative error rates grow still larger when the tempo accelerates further: although the recognition rate of this algorithm also drops under fast-paced conditions, it remains significantly higher than that of the traditional algorithm.

The two methods also differ in their recognition results on individual songs. As shown in Table 1, "To Alice" and "Canon (0.75× speed)" both have 2.7 notes per second, yet the two methods recognize them with different accuracy: "Canon (0.75× speed)", whose local rhythm is relatively fast, has the lower recognition accuracy. Some slow songs even have worse recognition results than fast ones. As shown in Table 3, "Dream Wedding (1.25× speed)" has 4.2 notes per second, and the accuracy rates of the two methods are 68.8% and 85.8%, respectively; yet "Happy Farmer" in Table 2 has only 3.3 notes per second, and the recognition rates of the two methods are only 53.1% and 79.6%. This may be due to the uneven rhythm of the music itself.

Conclusion

This paper proposes an improved ordinary-differential-equation algorithm for pitch-period extraction. When the rhythm of the piano music is fast, the average accuracy of the algorithm is 63.1%; at a moderate tempo it is 88.7%; and for slow music it reaches 97.1%. Over the three groups of slow, medium, and fast music together, the average recognition accuracy is 83.0%. The algorithm therefore achieves high recognition accuracy on piano sounds across different tempos, and it markedly improves the recognition accuracy of fast-paced piano tones.



References

[1] Gabrielli, L., Cantarini, M., Castellini, P., & Squartini, S. The Rhodes electric piano: Analysis and simulation of the inharmonic overtones. The Journal of the Acoustical Society of America, 2020; 148(5):3052–3064. doi:10.1121/10.0002002

[2] Zhang, D. Application of audio visual tuning detection software in piano tuning teaching. International Journal of Speech Technology, 2019; 22(1):251–257. doi:10.1007/s10772-019-09599-5

[3] Gruhn, W., Ristmägi, R., Schneider, P., D'Souza, A., & Kiilu, K. How stable is pitch labeling accuracy in absolute pitch possessors? Empirical Musicology Review, 2019; 13(3–4):110–123. doi:10.18061/emr.v13i3-4.6637

[4] Gençoğlu, M., & Agarwal, P. Use of Quantum Differential Equations in Sonic Processes. Applied Mathematics and Nonlinear Sciences, 2021; 6(1):21–28. doi:10.2478/amns.2020.2.00003

[5] Vanli, A., Ünal, I., & Özdemir, D. Normal complex contact metric manifolds admitting a semi symmetric metric connection. Applied Mathematics and Nonlinear Sciences, 2020; 5(2):49–66. doi:10.2478/amns.2020.2.00013

[6] Nichols, B. E. Effect of vocal versus piano doubling on children's singing accuracy. Psychology of Music, 2021; 49(5):1415–1423. doi:10.1177/0305735620936757

[7] Friedman, R. S., Kowalewski, D. A., Vuvan, D. T., & Neill, W. T. Consonance preferences within an unconventional tuning system. Music Perception: An Interdisciplinary Journal, 2021; 38(3):313–330. doi:10.1525/mp.2021.38.3.313

[8] Reis, K. S., Heald, S. L., Veillette, J. P., Van Hedger, S. C., & Nusbaum, H. C. Individual differences in human frequency-following response predict pitch labeling ability. Scientific Reports, 2021; 11(1):1–10. doi:10.1038/s41598-021-93312-7

[9] Nápoles, J., Springer, D. G., Silvey, B. A., & Adams, K. Effects of pitch source on pitch-matching and intonation accuracy of collegiate singers. Journal of Research in Music Education, 2019; 67(3):270–285. doi:10.1177/0022429419863034

[10] Larrouy-Maestri, P., Harrison, P. M., & Müllensiefen, D. The mistuning perception test: A new measurement instrument. Behavior Research Methods, 2019; 51(2):663–675. doi:10.3758/s13428-019-01225-1

[11] Lahdelma, I., & Eerola, T. Cultural familiarity and musical expertise impact the pleasantness of consonance/dissonance but not its perceived tension. Scientific Reports, 2020; 10(1):1–11. doi:10.1038/s41598-020-65615-8

[12] Xu, W., Fang, X., Han, J., Wu, Z., & Zhang, J. Effect of coating thickness on sound absorption property of four wood species commonly used for piano soundboards. Wood and Fiber Science, 2020; 52(1):28–43. doi:10.22382/wfs-2020-004

[13] Chernyavska, M., & Zhang, M. Preludes and fugues for piano in the polyphonic works of Chinese composers. Rast Müzikoloji Dergisi, 2019; 9(3):2943–2960. doi:10.12975/rastmd.2021931

[14] Parncutt, R. Pitch-class prevalence in plainchant, scale-degree consonance, and the origin of the rising leading tone. Journal of New Music Research, 2019; 48(5):434–448. doi:10.1080/09298215.2019.1642360
