
Vocal Music Teaching Model Based on Finite Element Differential Mathematical Equations

Published: 15 Jul 2022
Volume & Issue: AHEAD OF PRINT
Page range: -
Received: 12 Mar 2022
Accepted: 13 May 2022
Introduction

In recent years, multimodal audio-visual signal processing has been a research hotspot in speech recognition. Fusing the auditory speech signal with the visual speech information produced while speaking can improve the recognition rate and robustness of speech recognition in noisy environments. However, speech recognition experiments show that although the visual movement of the lips is correlated with the speech signal, the two are not synchronized [1]. Lip motion precedes the speech signal by about 120 ms on average. A complete audio-visual speech recognition system should reflect this fact as accurately as possible.

There are two main methods for extracting visual speech features: pixel-based methods that operate on the video image and methods based on mouth contour features. The former applies a transformation directly to the image region of the mouth, and the transform-domain parameters are then extracted as features [2]. The advantage is that information such as the teeth and tongue during speech is retained, but the features are susceptible to the image background and to differences between speakers. The latter extracts the contour features of the mouth shape during speech.

Contour-based methods are less affected by the background of the visual image. At present, visual features based on the mouth contour mainly use either the coordinates of the mouth contour feature points directly or geometric features (perimeter, height, width, etc.) derived from the contour points. Some scholars use genetic algorithms to obtain contour coordinate points and then apply a discrete Fourier transform to the coordinate points of the current and adjacent frames to obtain the dynamic characteristics of the mouth shape. However, the genetic algorithm only uses the larger eigenvalues and the corresponding eigenvectors in model reconstruction and does not consider the residual part. Especially when the lips change drastically, the algorithm does not calibrate the mouth contour points accurately. Therefore, if the discrete Fourier transform is applied directly to the contour coordinate points, the resulting visual features are not robust.

This paper adopts the Bayesian Tangent Shape Model (BTSM) algorithm, which obtains a more accurate mouth contour by establishing a distribution model for the residual part of the principal component analysis [3]. From the contour feature points we then extract lip geometric features such as the width, height, and opening angles of the lips as the geometric features of visual speech. Finally, we transform these visual geometric features with a discrete Fourier transform, which eliminates the redundancy of the raw geometric features and reduces the feature dimension. In this way we obtain more robust dynamic lip features for visual speech.

Audio-video fusion methods include feature fusion, model fusion, and decision fusion. Model fusion has been shown to be the better fusion method. Some scholars have conducted analysis and recognition experiments on the Multi-Stream Hidden Markov Model (MSHMM) and its various extended models [4]. These models describe, to some extent, the asynchrony between the audio and video streams. However, in large- and medium-vocabulary continuous audio-visual speech recognition, these models can only model synchronous and asynchronous relations at the phoneme level, whereas in many cases the asynchrony between the audio and video streams extends beyond phoneme boundaries. Describing the asynchrony of the audio and video streams over longer time spans should therefore yield better recognition results.

Because the HMM has limited expressive power for complex multimodal speech recognition, further research has been carried out in the academic community. Some scholars use single-stream and multi-stream dynamic Bayesian network (DBN) models for continuous speech recognition. The HMM is a special case of the DBN, and the DBN is a generalization of the HMM. Compared with the HMM, the DBN model has better scalability and interpretability [5]. Some scholars have established a multi-stream DBN model in which the audio and video streams are forced to synchronize through word transition probabilities at the word nodes, but this model was only applied to speech recognition with multiple audio features. Other scholars have extended this model and used it for audio-visual speech recognition, and their study found an improvement in recognition rates. It was also found that the word transition probability nodes are influenced by both streams: the state transition probabilities of the video stream are simultaneously affected by the corresponding nodes of both streams, which limits the description of the asynchrony of the audio and video streams to some extent.

This paper introduces a multi-stream asynchronous DBN model, called the MS-ADBN model for short, in which the audio and video streams are synchronized at the word nodes. The transition probability of a word node is jointly determined by the nodes of the audio stream and the video stream [6]. This model reflects the asynchrony of the audio and video streams at the word level. We use the MS-ADBN model to conduct joint audio-visual speech recognition experiments and compare it with the MSHMM model under the same experimental conditions, obtaining better results.

Mouth contour extraction based on BTSM algorithm

Visual features based on lip contours have proven to be good visual speech features. Since the mouth is a deformable target, the traditional genetic algorithm model analyzes labeled lip training data to build a statistical parametric model, which is then matched to a new lip image to obtain the mouth contour. However, the genetic algorithm describes the model using only the larger eigenvalues of the shape space and does not consider the reconstruction error. The BTSM model describes the model with principal component analysis (PCA) and additionally establishes a distribution model for the reconstruction error [7], so it can locate the mouth contour more accurately than the genetic algorithm model. At the same time, we use the EM (Expectation-Maximization) algorithm to train and estimate the matching parameters of the model. The main steps are as follows.

First, we manually annotate the training samples of mouth images and normalize them to the same reference frame.
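As an illustration, the following is a minimal sketch (assuming NumPy and landmark arrays of shape (N, 2)) of normalizing annotated mouth shapes to a common reference frame by removing translation and scale and rotating onto a reference shape; it is not the authors' exact alignment procedure.

```python
import numpy as np

def normalize_shape(points):
    """Center a landmark shape and scale it to unit norm.

    points: array of shape (N, 2) holding the annotated contour points.
    Returns the shape expressed in a translation- and scale-free frame.
    """
    centered = points - points.mean(axis=0)      # remove translation
    scale = np.linalg.norm(centered)             # overall size of the shape
    return centered / scale                      # remove scale

def align_to_reference(points, reference):
    """Rotate a normalized shape onto a reference shape (orthogonal Procrustes)."""
    # The optimal rotation comes from the SVD of the cross-covariance matrix.
    u, _, vt = np.linalg.svd(points.T @ reference)
    rotation = u @ vt
    return points @ rotation

# Usage sketch: annotated_shapes is assumed to be a list of (N, 2) landmark arrays.
# shapes = [normalize_shape(s) for s in annotated_shapes]
# reference = shapes[0]
# aligned = [align_to_reference(s, reference) for s in shapes]
```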

Principal component analysis and prior shape model. We use PCA to analyze the mean μ and covariance matrix Var of the training samples and compute the eigenvalues and eigenvectors ϕ of Var. We arrange the eigenvalues in descending order and keep the eigenvectors ϕt = (ψ1, ψ2, ⋯, ψt) corresponding to the first t eigenvalues. The residual part is represented by ε. Any new mouth-shape sample can then be described by the following formula:
$$x = \mu + \phi b + \phi \varepsilon \tag{1}$$

where μ is the mean and b is the t-dimensional shape parameter of the model. Since ε and b are independent, the distribution of ε can be described by the following model:
$$p(\varepsilon) \sim \exp\!\left(-\left\| \varepsilon \right\|^{2} / 2\sigma^{2}\right), \qquad \sigma^{2} = \frac{1}{2L-4}\sum_{i=t+1}^{2L-4} \lambda_i \tag{2}$$

where L is the number of samples in the training set, ||ε|| is the Euclidean norm, and λi are the eigenvalues of Var. The shape parameter b is constrained in the same way as in the genetic algorithm: $\left| b_i \right| < 3\sqrt{\lambda_i}$.
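The following minimal sketch (assuming aligned shape vectors stacked in a NumPy array X of shape (L, 2N)) shows how the mean, the retained eigenvectors ϕt, the retained eigenvalues, and the residual variance σ² of Equation (2) could be computed; variable names are illustrative and this is not the authors' implementation.

```python
import numpy as np

def build_shape_model(X, t):
    """Build a PCA prior shape model from aligned shape vectors.

    X: array of shape (L, 2N), one flattened (x1..xN, y1..yN) shape per row.
    t: number of leading eigenvectors to keep.
    """
    L, dim = X.shape
    mu = X.mean(axis=0)                          # mean shape used in Eq. (1)
    var = np.cov(X, rowvar=False)                # covariance matrix Var
    eigvals, eigvecs = np.linalg.eigh(var)       # returned in ascending order
    order = np.argsort(eigvals)[::-1]            # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    phi_t = eigvecs[:, :t]                       # retained eigenvectors (psi_1..psi_t)
    upper = min(2 * L - 4, dim)                  # upper summation limit from Eq. (2)
    sigma2 = eigvals[t:upper].sum() / (2 * L - 4)  # residual variance of Eq. (2)
    return mu, phi_t, eigvals[:t], sigma2

def clamp_shape_params(b, eigvals_t):
    """Limit each shape parameter to |b_i| < 3*sqrt(lambda_i)."""
    limit = 3.0 * np.sqrt(eigvals_t)
    return np.clip(b, -limit, limit)
```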

Establish a local grey-level model for each point. Based on the training samples, the algorithm samples grey levels along the normal direction of each contour feature point to build a local model for that point. During each iteration of the mouth contour search in a new image, we sample m points on each side of the current point along its normal direction and look for the best matching point.

Search for the mouth contour in a new image.

For an image y to be searched, we compute the model parameters within a Bayesian framework, which is equivalent to a maximum a posteriori (MAP) problem. We employ an EM-based parameter estimation algorithm to calculate the transformation parameters from the known tangent shape x and the observed shape vector y. The search starts from the mean lip contour. In each iteration, we first use the local grey-level models to find the current optimal shape estimate and then use the statistical shape model as a constraint in the tangent space, repeating until convergence. Finally, we obtain the sequence of mouth contour feature points (x1, ⋯, xN, y1, ⋯, yN).
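A simplified sketch of this constrained iterative search is given below. It omits the EM pose estimation and the Bayesian residual term of BTSM and only illustrates the alternation between local matching and shape-model projection; find_best_local_match stands in for the local grey-level search and is an assumed callback, not something defined in the paper.

```python
import numpy as np

def search_mouth_contour(image, mu, phi_t, eigvals_t, find_best_local_match,
                         n_iter=20, tol=1e-3):
    """Iteratively fit the statistical shape model to a new mouth image.

    mu, phi_t, eigvals_t: outputs of build_shape_model above.
    find_best_local_match(image, shape): returns, for every contour point,
    the best matching position found along its normal direction.
    """
    shape = mu.copy()                                  # start from the mean lip contour
    for _ in range(n_iter):
        target = find_best_local_match(image, shape)   # local grey-level search
        b = phi_t.T @ (target - mu)                    # project into shape space
        limit = 3.0 * np.sqrt(eigvals_t)
        b = np.clip(b, -limit, limit)                  # enforce |b_i| < 3*sqrt(lambda_i)
        new_shape = mu + phi_t @ b                     # constrained reconstruction
        if np.linalg.norm(new_shape - shape) < tol:    # convergence test
            return new_shape
        shape = new_shape
    return shape
```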

Visual speech dynamic feature extraction based on discrete Fourier transform

In this paper, a total of 20 feature points (shown in Figure 1) on the inner and outer lip contours obtained by the BTSM algorithm are used to represent the mouth shape, and the geometric features of the mouth shape are extracted directly from these contour feature points [8]. They comprise 20-dimensional features: the degree of opening (8 horizontal and 8 vertical distances) and the opening angles (α, β, θ and •) of the lips. We normalize these feature values to obtain the normalized mouth geometric feature vector (c1, c2, ⋯, cN).
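A minimal sketch of turning contour points into such geometric features is shown below; the point-index pairs and triples are hypothetical placeholders, since the exact correspondence to the 20 landmarks in Figure 1 is not specified here.

```python
import numpy as np

# Hypothetical index pairs/triples on the 20-point contour; the real ones follow Figure 1.
HORIZONTAL_PAIRS = [(0, 10), (1, 9), (2, 8), (3, 7), (12, 18), (13, 17), (14, 16), (11, 15)]
VERTICAL_PAIRS = [(2, 14), (3, 13), (4, 12), (5, 11), (1, 15), (6, 16), (7, 17), (0, 18)]
ANGLE_TRIPLES = [(10, 0, 5), (0, 10, 5), (15, 11, 13), (11, 15, 13)]  # angle at the middle index

def mouth_geometry(points):
    """points: (20, 2) array of lip landmarks -> 20-dim geometric feature vector."""
    dists = [np.linalg.norm(points[i] - points[j])
             for i, j in HORIZONTAL_PAIRS + VERTICAL_PAIRS]          # 16 distances
    angles = []
    for a, b, c in ANGLE_TRIPLES:                                     # 4 opening angles
        v1, v2 = points[a] - points[b], points[c] - points[b]
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        angles.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    feat = np.asarray(dists + angles)
    return feat / (np.linalg.norm(feat) + 1e-9)                       # simple normalization (assumed)
```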

Figure 1. Schematic diagram of visual features of the lips

Although this paper uses BTSM to obtain more accurate contour feature points, the recognition performance is only average when the static geometric features are used directly as visual speech features. For speech it is more important to reflect the dynamic characteristics of mouth-shape changes. If we simply take differential features of the geometric components, the dimension becomes relatively large, and the redundancy between features increases the intermediate and training parameters required by the model [9]. Following the method in the literature, this paper applies a discrete Fourier transform to multi-frame features to obtain viseme-based dynamic characteristics of the mouth shape. Earlier work applied the discrete Fourier transform directly to the contour coordinate points of multiple frames; however, the mouth-shape coordinates obtained by the parametric model are not very accurate, especially when the lips change drastically. In this paper, the discrete Fourier transform is instead applied to geometric features such as the height, width, and angles of the mouth shape. In this way, we obtain more stable dynamic visual speech features.

Dynamic geometry of the mouth

The audio feature rate is 100 Hz, while the corresponding video frame rate is 25 Hz, so we first perform linear interpolation to make the audio and video feature rates consistent [10]. To reflect the dynamic information of the visual geometric features, this paper takes the feature information of the 5 frames before and after each data frame and concatenates them in sequence. This is equivalent to moving an observation window of length 11 frames over the lip-shape geometric feature sequence to obtain multi-frame concatenated lip features, which form the input of the discrete Fourier transform:
$$x_t = \left( c_1^{t-5}, c_2^{t-5}, \cdots, c_N^{t-5}, \cdots, c_1^{t}, \cdots, c_N^{t}, \cdots, c_1^{t+5}, \cdots, c_N^{t+5} \right) \tag{3}$$
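A minimal sketch of this step, assuming NumPy and a per-frame geometric feature matrix of shape (num_video_frames, N), is shown below; it upsamples the 25 Hz video features to 100 Hz by linear interpolation and then stacks an 11-frame window around each frame as in Eq. (3).

```python
import numpy as np

def upsample_features(video_feats, src_rate=25, dst_rate=100):
    """Linearly interpolate per-frame features from src_rate to dst_rate."""
    n_src = video_feats.shape[0]
    t_src = np.arange(n_src) / src_rate
    t_dst = np.arange(int(n_src * dst_rate / src_rate)) / dst_rate
    return np.stack([np.interp(t_dst, t_src, video_feats[:, d])
                     for d in range(video_feats.shape[1])], axis=1)

def window_features(feats, half_window=5):
    """Concatenate the 5 frames before and after each frame into one vector (Eq. (3))."""
    padded = np.pad(feats, ((half_window, half_window), (0, 0)), mode="edge")
    windows = [padded[t:t + 2 * half_window + 1].reshape(-1)        # 11 * N dimensions
               for t in range(feats.shape[0])]
    return np.stack(windows)

# Example: 20-dim geometric features at 25 Hz -> 220-dim windowed vectors at 100 Hz.
geo_25hz = np.random.rand(50, 20)        # stand-in for real lip geometric features
x = window_features(upsample_features(geo_25hz))
print(x.shape)                           # (200, 220)
```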

Determination of discrete Fourier output categories

The number of output classes needs to be determined before the transform is applied. Since a viseme is the basic speech unit in the visual domain, there is a definite mapping relationship between phonemes and visemes: phonemes can be clustered into several viseme classes in the visual domain [11]. We can therefore regard visemes as retaining the basic discriminative information of visual speech features. In this paper, the number of viseme classes is taken as the number of output classes of the transform. The digital audio-video database used in this paper has 13 viseme units.
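For illustration, a phoneme-to-viseme mapping can be represented as a simple lookup table. The groupings below are common examples (e.g. the bilabials /p, b, m/ sharing one viseme) and are hypothetical here, since the paper's actual 13-class mapping is not listed.

```python
# Hypothetical phoneme-to-viseme lookup; the paper's own 13-class mapping is not given here.
PHONEME_TO_VISEME = {
    "p": "V_bilabial", "b": "V_bilabial", "m": "V_bilabial",     # lips closed
    "f": "V_labiodental", "v": "V_labiodental",                  # lower lip against teeth
    "t": "V_alveolar", "d": "V_alveolar", "n": "V_alveolar",
    "s": "V_fricative", "z": "V_fricative",
    "iy": "V_spread", "ih": "V_spread",                          # spread lips
    "uw": "V_rounded", "ow": "V_rounded",                        # rounded lips
}

def viseme_labels(phoneme_sequence):
    """Map a phoneme transcription to viseme class labels."""
    return [PHONEME_TO_VISEME.get(p, "V_other") for p in phoneme_sequence]

print(viseme_labels(["s", "ih", "k", "s"]))   # ['V_fricative', 'V_spread', 'V_other', 'V_fricative']
```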

Discrete Fourier Training and Visual Feature Extraction

The input and output dimensions determined above are 220 and 13, respectively, and the transform must first be trained. For a known C-class classification problem, the training data are xl, l = 1, ⋯, L, and the class of each training sample, xl ∈ ci, i = 1, 2, ⋯, C, is known. The purpose of the LDA transformation is then to find a transformation matrix A satisfying
$$\bar{A} = \arg\max_{A} \frac{\det\!\left(A^{T} S_{B} A\right)}{\det\!\left(A^{T} S_{W} A\right)} \tag{4}$$

where SB and SW are the inter-class dispersion matrix and the intra-class dispersion matrix of the training data, respectively. They can be expressed as
$$S_B = \sum_{c \in C} \Pr(c)\left(\mu_c - \mu\right)\left(\mu_c - \mu\right)^{T}, \qquad S_W = \sum_{c \in C} \Pr(c)\,\sigma_c \tag{5}$$

Pr(c) is the prior probability, μc and σc are the mean and covariance of the training data of class c, and μ is the mean of all training data. In the small-sample case, Equation (4) may suffer from singularity of the intra-class dispersion matrix. Therefore, this paper adopts the direct LDA algorithm, which avoids a possibly singular SW by diagonalizing SB and SW, and finally obtains the projection matrix Alda. The input vector X is transformed into Xlda = AldaX, and Xlda is the resulting visual speech feature. This projection maps the multi-frame mouth-shape geometric features into 13-dimensional features according to the viseme classes, which reduces the feature dimension and eliminates feature redundancy while preserving the maximum discriminative information of the visual speech.
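The LDA projection of Equations (4)-(5) can be sketched as follows, assuming NumPy/SciPy. A small ridge term is added to SW here in place of the direct-LDA treatment of singularity, so this is an illustrative simplification rather than the paper's exact algorithm.

```python
import numpy as np
from scipy.linalg import eigh

def lda_projection(X, labels, out_dim=13, ridge=1e-6):
    """Compute an LDA projection matrix A maximizing Eq. (4) from training data.

    X: (L, D) training vectors (here D = 220); labels: (L,) viseme class ids.
    Returns A of shape (D, out_dim); features are projected as X @ A.
    """
    classes, counts = np.unique(labels, return_counts=True)
    priors = counts / len(labels)
    mu = X.mean(axis=0)

    D = X.shape[1]
    S_B = np.zeros((D, D))
    S_W = np.zeros((D, D))
    for c, p in zip(classes, priors):
        Xc = X[labels == c]
        mu_c = Xc.mean(axis=0)
        diff = (mu_c - mu)[:, None]
        S_B += p * diff @ diff.T                            # between-class scatter, Eq. (5)
        S_W += p * np.cov(Xc, rowvar=False, bias=True)      # within-class scatter, Eq. (5)

    S_W += ridge * np.eye(D)                                # regularize instead of direct LDA
    # Generalized eigenproblem S_B a = lambda S_W a; the top eigenvectors maximize Eq. (4).
    eigvals, eigvecs = eigh(S_B, S_W)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:out_dim]]
```

Projected visual features are then obtained as Xlda = X @ A (row-vector convention; the text's Xlda = AldaX uses column vectors).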

Experiments and Results Analysis
Audio and video database and audio and video feature extraction

The audio-video database is an English continuous-digit audio-visual database recorded in our laboratory in a quiet environment; the video consists of frontal head images. The database contains the 11 digit words 0-10, involving a total of 22 phonemes, and the scripts follow the sentence order of the Aurora 2.0 database. This paper uses 100 sentences of clean audio-video data as the training set; another 50 sentences, together with the corresponding noise-added speech and video data, are used as the test set.

The frame rate of the audio data is 100 frames/s. We use HTK to extract 13-dimensional MFCC features and energy features from the audio data. From the video data we obtain 13-dimensional visual features via the BTSM mouth contour feature extraction algorithm and the discrete Fourier projection [12], which we call discrete Fourier features. The 20-dimensional static geometric features of the mouth shape are used for comparison.
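The audio front end in the paper is HTK. Purely as an illustration of an equivalent MFCC-D-A pipeline, the sketch below uses librosa (an assumption, not the authors' toolchain) to produce 13 MFCCs with delta and acceleration coefficients at 100 frames/s.

```python
import librosa
import numpy as np

def mfcc_d_a(wav_path, sr=16000, n_mfcc=13):
    """13 MFCCs plus delta and acceleration (MFCC-D-A) at a 100 Hz frame rate."""
    y, sr = librosa.load(wav_path, sr=sr)
    hop = sr // 100                                         # 10 ms hop -> 100 frames/s
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, hop_length=hop)
    delta = librosa.feature.delta(mfcc)                     # first-order dynamics
    accel = librosa.feature.delta(mfcc, order=2)            # second-order dynamics
    return np.vstack([mfcc, delta, accel]).T                # (num_frames, 39)
```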

Experimental results and analysis

There are 22 phonemes in both the audio and video streams in the experiment. Together with silence and pause models, a total of 50 Gaussian model parameter sets need to be trained. The training set has more than 60 samples per word, which is sufficient to train the above model parameters. This paper uses the single-stream DBN model, the conventional HMM, and the multi-stream HMM (MSHMM) for comparison [13]. The author conducts speech recognition experiments using MFCC-D-A audio features, discrete Fourier visual features, and geometric visual features. For the multi-stream asynchronous DBN (MS-ADBN) model and the multi-stream HMM, speech recognition experiments were also conducted under the same conditions using MFCC-D-A audio features combined with either discrete Fourier or geometric visual features. The results are shown in Table 1.

Table 1. Comparison of word recognition rates (%) under various HMM and DBN models at different SNRs

System | 0dB | 5dB | 10dB | 15dB | 20dB | 30dB | Clean | 0-15dB
SS-DBN (MFCC-D-A) | 42.94 | 66.10 | 71.75 | 77.97 | 82.46 | 96.62 | 97.74 | 64.69
HMM (MFCC-D-A) | 27.55 | 42.76 | 62.67 | 74.62 | 85.67 | 98.20 | 98.44 | 52.88
SS-DBN (static geometric features) | 54.58 | 54.58 | 54.58 | 54.58 | 54.58 | 54.58 | 54.58 | 54.58
SS-DBN (discrete Fourier features) | 58.67 | 58.67 | 58.67 | 58.67 | 58.67 | 58.67 | 58.67 | 58.67
HMM (static geometric features) | 46.54 | 46.54 | 46.54 | 46.54 | 46.54 | 46.54 | 46.54 | 46.54
HMM (discrete Fourier features) | 54.21 | 54.21 | 54.21 | 54.21 | 54.42 | 54.42 | 54.42 | 54.42
MSHMM (MFCC-D-A + static geometric features) | 29.28 | 49.72 | 62.84 | 77.97 | 84.75 | 92.42 | 94.24 | 55.24
MSHMM (MFCC-D-A + discrete Fourier features) | 25.76 | 55.76 | 70.00 | 82.84 | 88.28 | 92.66 | 96.05 | 62.09
MS-ADBN (MFCC-D-A + static geometric features) | 41.18 | 56.21 | 72.20 | 82.66 | 88.89 | 94.22 | 96.08 | 64.56
MS-ADBN (MFCC-D-A + discrete Fourier features) | 42.79 | 67.86 | 75.82 | 82.01 | 89.72 | 94.56 | 96.74 | 67.62

When the signal-to-noise ratio of the speech signal is relatively high, the recognition rate of the DBN model on audio features is slightly lower than that of the HMM. The most likely reason is that the DBN model uses a monophone model while the HMM uses a triphone model. When the signal-to-noise ratio is low, especially at or below 15 dB, the recognition rate of the DBN model is on average 12.81% higher than that of the HMM, showing better robustness to noise.

For both the HMM and the SS-DBN model, the recognition rate of the discrete Fourier features is 7.71% and 4.09% higher, respectively, than that of the geometric features, which shows that the discrete Fourier features are more robust. With the discrete Fourier video features, the recognition rate of the SS-DBN model is 4.36% higher than that of the HMM.

When the signal-to-noise ratio is low, the multi-stream models outperform the corresponding single-stream models because video and speech features are combined [14]. The recognition performance of the multi-stream models that fuse audio features with discrete Fourier features is better than that of the multi-stream models that fuse audio features with geometric features. The MSHMM and MS-ADBN models that fuse audio features with discrete Fourier features improve performance by 9.21% and 2.93%, respectively, over the corresponding single-stream models. This again shows that the discrete Fourier visual speech features are relatively robust.

The performance of the MS-ADBN model is significantly better than that of the MSHMM, because the MS-ADBN model describes the synchronous and asynchronous relationship of the audio and video streams within a word, whereas the MSHMM model restricts the audio and video streams to be synchronized at the phoneme nodes. In the test environments with signal-to-noise ratios of 0-30 dB, the average recognition rate of the MS-ADBN model is 4.92% and 6.53% higher than that of the MSHMM.

The highest recognition rate of the multi-stream models in Table 1 on clean speech is only 96.74%. This is because the training data are not yet sufficient, and the problem of stream exponents (weights) for the audio and video streams is not discussed in detail in this paper. These model details require more in-depth study.

Conclusion

This paper constructs a multi-stream asynchronous dynamic Bayesian network (MS-ADBN) model in which the word transition probabilities are jointly determined by the audio stream and the video stream, reflecting the synchrony and asynchrony of the audio and video streams at the word level. At the same time, we use the Bayesian Tangent Shape Model (BTSM) algorithm to extract the mouth contour and introduce a novel visual feature: a discrete Fourier transform is applied to multi-frame mouth geometric features to obtain robust discrete Fourier dynamic visual speech features. Using the mouth geometric features and the discrete Fourier features, the average recognition rate (0~30dB) of the MS-ADBN model is 6.53% and 4.92% higher than that of the MSHMM, respectively. This is because the MSHMM enforces audio-video synchronization at the phoneme level, while the MS-ADBN model relaxes this constraint to the word level. For the single-stream SS-DBN model, the speech recognition rate obtained with the discrete Fourier visual speech features is higher than that obtained with the mouth geometric features, which indicates that the extracted discrete Fourier visual features are more robust.


References

[1] Yoo, H. Cultural Humility and Intercultural Contexts of Music Education. Music Educators Journal, 2021; 108(2): 36–43. DOI: 10.1177/00274321211060046

[2] Vampola, T., Horáček, J., & Laukkanen, A. M. Finite element modeling of the effects of velopharyngeal opening on vocal tract reactance in female voice. The Journal of the Acoustical Society of America, 2021; 150(3): 2154–2162. DOI: 10.1121/10.0006370

[3] Ravignani, A., Dalla Bella, S., Falk, S., Kello, C. T., Noriega, F., & Kotz, S. A. Rhythm in speech and animal vocalizations: a cross-species perspective. Annals of the New York Academy of Sciences, 2019; 1453(1): 79–98. DOI: 10.1111/nyas.14166

[4] Liu, B., Polce, E., & Jiang, J. An objective parameter to classify voice signals based on variation in energy distribution. Journal of Voice, 2019; 33(5): 591–602. DOI: 10.1016/j.jvoice.2018.02.011

[5] Confredo, D. A., & Brittin, R. V. Effects of single versus multiple staff music notation on wind chamber group performance outcomes and rehearsal procedures. Journal of Band Research, 2019; 55(1): 49–76.

[6] Duan, Y. Y., Liu, P. R., Huo, T. T., Liu, S. X., Ye, S., & Ye, Z. W. Application and Development of Intelligent Medicine in Traditional Chinese Medicine. Current Medical Science, 2021; 41(6): 1116–1122. DOI: 10.1007/s11596-021-2483-2

[7] Smith, S. L., Maxfield, L., & Hunter, E. J. Sensitivity analysis of muscle mechanics-based voice simulator to determine gender-specific speech characteristics. Biomechanics and Modeling in Mechanobiology, 2019; 18(2): 453–462. DOI: 10.1007/s10237-018-1095-7

[8] Alfian, A., & Khristianto, K. Interpersonal Meaning Complementarity between Song Lyrics and Its Cover. Jurnal Ilmiah Wahana Pendidikan, 2022; 8(1): 9–18.

[9] Matney, W. Music therapy as multiplicity: Implications for music therapy philosophy and theory. Nordic Journal of Music Therapy, 2021; 30(1): 3–23. DOI: 10.1080/08098131.2020.1811371

[10] Jabber, K. W., & Mahmood, A. A. Non-verbal communication between two non-native English speakers: Iraqi and Chinese. Theory and Practice in Language Studies, 2020; 10(2): 189–196. DOI: 10.17507/tpls.1002.06

[11] Günerhan, H., & Çelik, E. Analytical and approximate solutions of fractional partial differential-algebraic equations. Applied Mathematics and Nonlinear Sciences, 2020; 5(1): 109–120. DOI: 10.2478/amns.2020.1.00011

[12] Durur, H., & Yokuş, A. Exact solutions of (2+1)-Ablowitz-Kaup-Newell-Segur equation. Applied Mathematics and Nonlinear Sciences, 2021; 6(2): 381–386. DOI: 10.2478/amns.2020.2.00074

[13] Aghili, A. Complete solution for the time fractional diffusion problem with mixed boundary conditions by operational method. Applied Mathematics and Nonlinear Sciences, 2021; 6(1): 9–20. DOI: 10.2478/amns.2020.2.00002

[14] Müller, M. An educational guide through the FMP notebooks for teaching and learning fundamentals of music processing. Signals, 2021; 2(2): 245–285. DOI: 10.3390/signals2020018
