AHEAD OF PRINT

Journal Details
Format: Journal
eISSN: 2444-8656
First Published: 01 Jan 2016
Publication timeframe: 2 times per year
Languages: English
License: Open Access

Design of language assisted learning model and online learning system under the background of artificial intelligence

Published Online: 10 Nov 2022
Volume & Issue: AHEAD OF PRINT
Received: 24 Apr 2022
Accepted: 30 Jun 2022
Introduction

With the rapid development of intelligent communication technology, its applications have become widespread. The original one-way, receptive learning mode of fixed teachers, fixed classes, fixed content, fixed processes and fixed standards is being broken. It is giving way to a brand-new learning process in which students, with the help of computers, networks and other multimedia devices, can easily complete personalised, discovery-based learning by independently selecting, reasonably accepting, scientifically processing and promptly responding to information [1,2,3]. This discovery-based mode will change the receptive pattern of classroom-, teacher- and textbook-centred learning.

With the gradual development of computer network technology and the successful application of artificial intelligence (AI) in expert systems, the development space of computer-aided instruction (CAI) is also growing [4]. However, the adaptability of existing CAI systems is so poor that the platforms cannot effectively integrate network learning resources, and imperfect database resources pose new challenges for users' online learning. In addition, the quality and utilisation rate of online learning websites are generally low. Many websites have problems in curriculum arrangement, functional architecture, interface design, course update frequency, reception capacity and access speed, and cannot meet users' actual needs [5,6]. The advancement of AI has driven the rapid development of digital learning resources, with great changes in resource carriers, transmission methods and applications: knowledge is represented as data, and resources are stored, transmitted and shared through the Internet [7]. At the same time, AI technology can be effectively integrated to realise the efficient absorption and utilisation of digital resources, reflect their intellectual value and promote the development of the digital resource system.

Learning in the era of AI means shifting from traditional maintenance learning to innovative learning, and from receptive learning to constructive learning. Therefore, to promote the development of foreign language education and teaching, information technology must be integrated with English teaching, which is of great significance to the innovation of the English teaching mode.

The integration of information resources in language learning

In a network-based English learning platform, learning resources are very important because they provide a rich material basis for learning activities. Generally, there are two kinds of learning resources [8, 9]: dedicated learning resources, such as textbooks and computer laboratories, and assisted learning resources, such as video rooms and science and technology museums. As shown in Figure 1, language learning resources take the form not only of words and pictures but also of video and audio. They include not only resources obtained from the Internet and multimedia computers but also those available through other channels, such as television, audio-visual equipment and slides. Moreover, they cover resources drawn from traditional teaching materials (such as textbooks and teaching aids), as well as the equipment, applications, institutions, websites and social and cultural organisations through which the above resources can be obtained.

Fig. 1

Different forms of language learning.

As shown in Figure 2, language learning resources can be divided into three aspects:

Teaching materials. These are the objects that can be used directly in learning: information that is consistent with the teaching plan and teaching content, carefully selected for use in the teaching process, and able to add to or organise students' knowledge.

Support system. It refers to all the internal or external conditions that are helpful to learning, including the support of learning motivation, learning tools, learning resources and related personnel.

Learning environment. This includes not only the place where students study but also the atmosphere produced during learning. The main feature of the learning environment is the communication effect produced by the interaction of the three aspects above.

Fig. 2

Classification of language learning resource.

Language assisted learning model
Learning resource extraction model

In the field of language recognition, the fusion of the LSA and N-gram language models can be used to extract learning resources effectively. The core idea of this method is to linearly interpolate the probabilities of the LSA and N-gram models: the N-gram language model predicts function words well, while the LSA model predicts long-span text well.

LSA model

Latent semantic analysis (LSA) is a widely used method in natural language processing [10]. It assumes that if two words appear repeatedly in the same documents, they are semantically similar. LSA constructs a matrix from a large body of text, where each row represents a word, each column represents a document, and each element is the number of times the word appears in the document. Singular value decomposition (SVD) is then used to reduce the dimension of this matrix, removing noise and irrelevant information and mining the implied semantic information. Finally, each word is expressed as a vector in a K-dimensional latent semantic space, and the cosine distance between two vectors expresses the similarity between the corresponding words, allowing features of various learning resources to be extracted.
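The procedure just described can be sketched in a few lines of Python; the word-document count matrix below is an assumed toy example, not the paper's corpus:

```python
import numpy as np

# Toy word-document count matrix W (4 words x 3 documents):
# rows = words, columns = documents, entries = raw counts.
W = np.array([
    [2, 0, 1],   # "study"
    [1, 0, 1],   # "learn"
    [0, 3, 0],   # "music"
    [0, 2, 1],   # "song"
], dtype=float)

# SVD, then truncation to rank K to obtain the latent semantic space.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
K = 2
word_vecs = U[:, :K] * s[:K]   # each row: one word in the K-dim latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words that co-occur in the same documents end up close in latent space.
sim_study_learn = cosine(word_vecs[0], word_vecs[1])
sim_study_music = cosine(word_vecs[0], word_vecs[2])
print(sim_study_learn > sim_study_music)  # True
```

Here "study" and "learn" share documents 1 and 3, so their latent vectors are far more similar than those of "study" and "music", which never co-occur.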

N-gram model

The generation of a word, a sentence or an article in natural language is regarded as a random process, whose basic unit can be treated as a random variable satisfying a certain probability distribution. The N-gram model takes N consecutive, overlapping words as a unit; according to the context of a unit, the unit with the highest probability is selected as the final result. In other words, a word's appearance in a text is closely related to its context. Suppose a text D = (w1, w2, w3, ⋯ , wi, ⋯ , wn), in which w1, w2, w3, ⋯ , wi−1, wi+1, ⋯ , wn constitute the context of the word wi. The probability of wi appearing in the text is determined by the probabilities of occurrence of all the words in the sequence w1, w2, w3, ⋯ , wi−1, wi+1, ⋯ , wn, so wi is strictly constrained. Such a strong constraint leads to too many parameters, which makes the calculation difficult, and the problem of data sparseness is also very serious.

To solve the above problems, the Markov assumption is usually introduced [11, 12]. The Markov assumption states that if the state of a stochastic process at time t1 is known, then the process at time t2 (t2 > t1) depends only on the state at t1 and not on any state before t1; this is the "no after-effect" property of a random process. Applying this assumption to natural language processing greatly reduces the amount of calculation and the dimension of the parameter space, because only short vocabulary sequences need to be considered.

The extraction of feature items by the N-gram model greatly helps to improve the accuracy of text classification. When N is 3 or larger, the relationships between words can be expressed more accurately; however, this requires estimating more parameters, which is harder than for a low-order model, reduces the reliability of the estimates, and thereby hurts predictive performance.
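Under the Markov assumption with N = 2, the conditional probability of a word given its predecessor reduces to a ratio of counts. A minimal bigram sketch, using an assumed toy corpus:

```python
from collections import Counter

# Assumed toy corpus for illustration.
corpus = "the student reads the book and the student writes the essay".split()

bigrams = Counter(zip(corpus, corpus[1:]))   # count(H, w)
unigrams = Counter(corpus)                   # count(H)

def p_ng(w, h):
    """Conditional probability of word w given the preceding word h."""
    return bigrams[(h, w)] / unigrams[h] if unigrams[h] else 0.0

# "student" follows "the" in 2 of the 4 occurrences of "the".
print(p_ng("student", "the"))  # 0.5
```

Extending to larger N only means keying the counts on longer tuples of preceding words, which is exactly where the parameter explosion and data sparseness described above appear.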

Integrated model

In the field of language recognition, the integration of the LSA and N-gram language models can be used to handle missing words in text sentences. The core idea is to linearly interpolate the probabilities of the LSA and N-gram models: the N-gram model predicts function words well, and the LSA model predicts long-span text well, which is meaningful for completing sentences.

For LSA, the word-document weight matrix W can be decomposed into the product of the low-rank matrices U, S and V: $W \approx USV^T$. Using the singular value decomposition, W is resolved in the sense of the minimum squared error $\sum_{ij} Q_{ij}^2$, in which $Q = W - USV^T$. By this definition, each column of W can be expressed as Formula (1): $W_j = USV_j^T$

That is, $W_j$ can be viewed as the dot product of linear combinations of $US$ with $V_j^T$. When a new document arrives, it is processed in the same way, as in Formula (2): $V_j = W_j^T U S^{-1}$

To integrate LSA with the N-gram language model, we linearly interpolate the probabilities of LSA and N-gram, as shown in Formula (3): $p(w|H) = \alpha p_{ng}(w|H) + (1 - \alpha) p_{lsa}(w|H)$ where $p_{ng}(w|H)$ indicates the probability of the word w following the history H in the N-gram language model, calculated by the Bayesian formula (4): $p_{ng}(w|H) = \frac{\mathrm{count}(w,H)}{\mathrm{count}(H)}$ where w is the candidate item, H is the word before w, count(w, H) is the number of times w co-occurs with H, and count(H) is the number of times the word H before w appears.

Then the LSA model is used to calculate the probability of the word w following H, as shown in Formula (5): $p_{lsa}(w|H) = \frac{\mathrm{sim}(h,w) - m}{\sum_{q \in V} (\mathrm{sim}(h,q) - m)}$ where h is the sum of the LSA vectors of the words before w, $m = \min_{w \in V} \mathrm{sim}(h, w)$ is the minimum cosine similarity between h and any word in the vocabulary V, and q ranges over the words in V. Since cosine similarity may be negative, subtracting the minimum value m ensures that the estimated probability lies between 0 and 1.

The resource text generation algorithm based on the fusion of the LSA and N-gram language models is as follows:

Input: candidate word; sentence S = {w1, w2, ⋯ , wk}, where w is a word in sentence S, wc is the answer word, and H is the word before the answer word.

Calculate the sum of the LSA vectors of all words in S, denoted LSAH.

Find the word wm whose vector has the smallest cosine similarity with LSAH, giving the minimum value m.

Use Formula (4) to calculate png(wc|H), the conditional probability of the answer word under the N-gram model.

Use Formula (5) to calculate plsa(wc|H), the conditional probability of the answer word under LSA.

Weight the two results above by Formula (3), and record the accumulated similarity between the correct answer and the other words as Sim1.

Use Formula (4) to calculate png(candidate|H), the conditional probability of the candidate word under the N-gram model.

Use Formula (5) to calculate plsa(candidate|H), the conditional probability of the candidate word under LSA.

Weight the two results by Formula (3), and record the accumulated similarity between the candidate word and the other words as Sim2. If Sim1 is larger than Sim2, the candidate word is an alternative answer for the sentence; otherwise, it is not.
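The fused scoring at the heart of the algorithm can be sketched as follows. The vocabulary, the two-dimensional LSA vectors and the bigram counts below are assumptions for illustration, not the paper's data; the context sum is collapsed to a single vector:

```python
import numpy as np

vocab = ["read", "book", "music"]
lsa = {                                   # assumed 2-D LSA word vectors
    "read":  np.array([1.0, 0.2]),
    "book":  np.array([0.9, 0.3]),
    "music": np.array([-0.2, 1.0]),
}
bigram_count = {("read", "book"): 3, ("read", "music"): 1}   # count(w, H)
context_count = {"read": 4}                                  # count(H)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def p_ng(w, h):
    """Formula (4): count-based N-gram conditional probability."""
    return bigram_count.get((h, w), 0) / context_count[h]

def p_lsa(w, h):
    """Formula (5): min-shifted cosine similarity, normalised over the vocabulary."""
    h_vec = lsa[h]                              # stands in for the context-vector sum
    m = min(cos(h_vec, lsa[q]) for q in vocab)  # minimum-similarity shift
    denom = sum(cos(h_vec, lsa[q]) - m for q in vocab)
    return (cos(h_vec, lsa[w]) - m) / denom

def score(w, h, alpha=0.5):
    """Formula (3): linear interpolation of the two probabilities."""
    return alpha * p_ng(w, h) + (1 - alpha) * p_lsa(w, h)

# "book" outscores "music" after "read" under both components.
print(score("book", "read") > score("music", "read"))  # True
```

Note that the min-shift in `p_lsa` makes the values sum to 1 over the vocabulary, as the text's argument about negative cosine similarities requires.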

Multi-task online learning model

Multi-task learning means learning multiple related tasks at the same time; it makes full use of the internal relationships between tasks and can improve the learning performance of all tasks as a whole.

At present, most algorithms proposed for multi-task learning are based on variants of the l1 norm, such as the l2,1 norm and the l∞,1 norm. The l2,1 norm applies an l2 norm to each row of the matrix and then an l1 norm to the result; it is therefore the sum of the l2 norms of the rows. Similarly, the l∞,1 norm is the sum of the l∞ norms of the rows. The main advantage of these two norms is that they guarantee row sparsity while achieving a Group-Lasso effect across features. Since the l2,1 norm is convex, when the loss function is also convex, a learning problem based on the l2,1 norm has a global optimal solution [13]; the same is true for the l∞,1 norm.
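The two row-wise norms are computed as follows; the weight matrix is an assumed toy example:

```python
import numpy as np

W = np.array([[3.0, 4.0],    # rows = features, columns = tasks
              [0.0, 0.0],    # an all-zero row: the feature is dropped for every task
              [1.0, 0.0]])

l21   = np.sum(np.linalg.norm(W, ord=2, axis=1))        # sum of row l2 norms
linf1 = np.sum(np.linalg.norm(W, ord=np.inf, axis=1))   # sum of row l-inf norms

print(l21)    # 5.0 + 0.0 + 1.0 = 6.0
print(linf1)  # 4.0 + 0.0 + 1.0 = 5.0
```

The zero row illustrates the row-sparsity these norms encourage: a feature is either kept or discarded jointly across all tasks.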

Problems of multi-task English learning

Assume there are Q English learning tasks from the same space X × Y, each task containing Mq samples, where X ⊆ R^N, Y ⊆ R and N is the data dimension. The Q tasks can thus be regarded as the data set $S = \bigcup_{q=1}^{Q} S_q$, where $S_q = \{ z_m^q = (x_m^q, y_m^q) \}_{m=1}^{M_q}$ is drawn from X × Y according to an underlying distribution Pq. The purpose of multi-task learning is to learn Q decision functions $f^q: R^N \to R$, q = 1, …, Q, such that $f^q(x_m^q)$ approximates $y_m^q$; when Q = 1, the problem degenerates into single-task learning.

Weighted multi-task learning model

In a typical multi-task learning model, the decision function $f^q$ of the q-th task is parameterised by a weight vector $w^q$, that is, $f^q(x) = w^{qT} x, \quad q = 1, \ldots, Q$

Therefore, all the learned weight vectors form a weight matrix, recorded as $W = (W_1; \ldots; W_N) = (w^1, \ldots, w^Q)$, whose rows $W_j$ index features and whose columns $w^q$ index tasks.

The goal of multi-task learning is to obtain the weight matrix by solving the regularised empirical risk minimisation problem $\min_{W \in R^{N \times Q}} \sum_{q=1}^{Q} \frac{1}{M_q} \sum_{m=1}^{M_q} l^q(w^q, z_m^q) + \lambda R(W)$ where $l^q(w^q, z_m^q)$ is the loss of the q-th task on sample $z_m^q$, R(W) is a matrix-norm regularisation operator, and λ is the regularisation parameter. Both the loss function and the regulariser are assumed to be convex. Commonly used regularisation operators are:

The l1,1 norm, $\sum_{q=1}^{Q} \| w^q \|_1$, simply takes the l1 norm of every weight vector and sums them; it yields a sparse solution but cannot capture information shared across tasks.

The l2,1 norm, $\sum_{j=1}^{N} \| W_j \|_2$. This norm selects features according to the strength of an input variable across all tasks, unlike single-task learning, which considers a single input variable. More generally, for an lp,1 norm the choice of p reflects the degree of feature sharing during learning: p = 1 means no sharing, while p = ∞ means full sharing.

In this paper, a weighted l2,1 norm, $R(W) = \sum_{j=1}^{N} \sigma_j \| W_j \|_2$, is adopted to improve the sparsity of the learned variables, where σ1, …, σN are greater than zero. The corresponding weighted multi-task feature selection model can therefore be expressed as $\min_{W \in R^{N \times Q}} \sum_{q=1}^{Q} \frac{1}{M_q} \sum_{m=1}^{M_q} l^q(w^q, z_m^q) + \lambda \sum_{j=1}^{N} \sigma_j \| W_j \|_2$
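Evaluating this weighted objective is straightforward; the sketch below uses a squared loss on assumed toy data (one sample per task), which is one common instantiation of the convex loss $l^q$:

```python
import numpy as np

N, Q = 3, 2                               # features, tasks
W = np.array([[1.0, 0.5],
              [0.0, 0.0],
              [2.0, 1.0]])                # N x Q weight matrix (toy values)
sigma = np.array([1.0, 1.0, 0.5])         # per-row weights, all > 0
lam = 0.1

X = [np.array([[1.0, 0.0, 1.0]]),         # one sample for task 1
     np.array([[0.0, 1.0, 1.0]])]         # one sample for task 2
y = [np.array([3.0]), np.array([1.0])]

# Empirical risk: mean squared loss per task, summed over tasks.
loss = sum(np.mean((Xq @ W[:, q] - yq) ** 2)
           for q, (Xq, yq) in enumerate(zip(X, y)))
# Weighted l2,1 regulariser: sigma-weighted sum of row l2 norms.
reg = lam * np.sum(sigma * np.linalg.norm(W, axis=1))
objective = loss + reg
print(round(objective, 4))  # 0.2236
```

With these toy values the predictions fit exactly, so the objective is pure regularisation cost.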

The optimal solution algorithm is as follows:

Input: $W_0 \in R^{N \times Q}$, $\bar{G}_0 = 0_{N \times Q}$, $\sigma_1 = 1_{N \times 1}$, $W_1 \leftarrow W_0$, $\lambda$, $\gamma$, $\varepsilon$, a strongly convex function h(W), and a non-negative, non-decreasing sequence $\{\beta_t\}$

for t = 1, …, T do

For one sample from each of the Q tasks $(z_t^1, \ldots, z_t^Q)$, given the loss $l_t$, calculate its gradient $G_t \in \partial l_t$ at $W_t$

Update the average gradient $\bar{G}_t = \frac{t-1}{t} \bar{G}_{t-1} + \frac{1}{t} G_t$

Calculate the next iterate $W_{t+1} = \mathop{\arg\min}_{W \in R^{N \times Q}} \{ \bar{G}_t^\top W + \lambda R(W) + \beta_t h(W) \}$

Update the weight coefficients $(\sigma_j)_{t+1} = 1 / (\| (W_j)_{t+1} \|_2 + \varepsilon)$, j = 1, 2, …, N

end

Output: $W_T$
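The paper does not fix the strongly convex function h. Under the common quadratic choice $h(W) = \frac{1}{2}\|W\|_F^2$ (an assumption for this sketch, familiar from regularised dual averaging), the argmin step has a row-wise closed form, $W_j = -\frac{1}{\beta}\max(0,\, 1 - \lambda\sigma_j / \|\bar{G}_j\|_2)\,\bar{G}_j$:

```python
import numpy as np

def rda_step(G_bar, sigma, lam, beta):
    """Row-wise shrinkage of the average gradient: rows whose average
    gradient is small are zeroed out, producing row sparsity across tasks."""
    norms = np.linalg.norm(G_bar, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam * sigma[:, None] / np.maximum(norms, 1e-12))
    return -(1.0 / beta) * scale * G_bar

G_bar = np.array([[0.6, 0.8],     # strong row: survives shrinkage
                  [0.1, 0.1]])    # weak row: eliminated
sigma = np.array([1.0, 1.0])
W_next = rda_step(G_bar, sigma, lam=0.5, beta=1.0)

print(np.allclose(W_next[1], 0.0))  # True: the weak feature row is zeroed
```

The subsequent reweighting $(\sigma_j)_{t+1} = 1/(\|(W_j)_{t+1}\|_2 + \varepsilon)$ then penalises the surviving small rows more strongly on the next iteration, which is what drives the improved sparsity the paper claims.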

Design of online language learning system
Demand analysis

Based on the analysis of information resource integration in language learning, multi-model fusion is adopted to extract learning resources, and the functional requirements of the system are then put forward. The system mainly provides a personalised learning platform for students and a personalised teaching platform for teachers, and it is used to complete English learning tasks in the actual classroom. This combines online learning with class learning, provides teachers with a teaching mode based on learning-tracking analysis, and makes it convenient for students to learn English anytime and anywhere.

English learning based on learning-tracking analysis tracks and analyses learning mainly by checking the last learned words, the course summary and the test results; synchronising the learning space in English learning courses; taking vocabulary tests; checking the learning plan when preparing for exams; and checking progress in teacher management. Accordingly, the functions of the English learning system are divided as shown in Figure 3:

Fig. 3

Division of systematic functions.

Therefore, the language learning system is mainly divided into four businesses: English vocabulary learning, English synchronous learning courses, exam preparation learning and teacher management. The design and implementation stage needs to do the following:

The system should provide practice, test and feedback for each learning content of the practice unit, and make a reasonable study plan for students;

It is necessary to provide a way to combine the learning content of the English series with its training exercises;

Optimisation of learning path should be considered in the system;

Try to provide students with personalised and expandable English learning methods and provide different types of tests.

Overall design

Designing the overall structure of the system is conducive to a better grasp of the system architecture. The overall structure of the online English learning system proposed in this paper is shown in Figure 4.

Fig. 4

Overall design of the system.

In the overall structure of the English learning system based on learning-tracking analysis, the system is divided into a user layer, a page layer, an application layer, a logic layer and a data layer. The user layer is divided into students and teachers. The page layer mainly includes the English vocabulary learning page, the English synchronous learning course page, the exam preparation learning page and the teacher management business page. The application layer mainly covers English vocabulary learning operations, English synchronous learning course operations and teacher management business operations. The logic layer mainly handles the processing for English vocabulary learning and for the English synchronous learning courses, while the data layer is the database of the English learning system based on learning-tracking analysis.

Architecture design

The modules of the English learning system cover English vocabulary learning, English synchronous learning courses, exam preparation learning and teacher management.

Vocabulary learning

As shown in Figure 5, the English vocabulary learning module covers personal information management, course selection, word learning, a test centre, comprehensive practice, grammar learning and leaving the classroom. Personal information management includes viewing basic information, test records and learning records. Course selection includes viewing all courses, selected courses, newly learned words, previously known words, remaining words and last learned words. Word learning includes entering the study session, handing in papers, starting the test, checking the course summary and checking the test results. The test centre also includes spelling practice, synchronous practice and comprehensive evaluation. Comprehensive practice includes synchronous course reading and extracurricular extended reading. Grammar learning covers checking the list of grammar knowledge points, checking specific knowledge points and unit quizzes.

Fig. 5

Classification of functions in vocabulary learning.

Teacher management business

As shown in Figure 6, the teacher management business module covers the student roster, progress inquiry, grade checking, message sending, personal information modification and data statistics. The student roster includes modifying student information, viewing test records, refreshing courses and viewing course learning records; progress inquiry includes selecting courses; and message sending includes checking my messages and checking sent messages.

Fig. 6

Classification of functions of teacher management business.

English synchronous learning course

As shown in Figure 7, the English synchronous learning course module covers checking the learning guide, checking the directory index of the knowledge map, pronunciation and vocabulary, topic and expression, reading and writing, grammar, comprehensive practice, checking the result report, question and answer, vocabulary tests and the learning space. Pronunciation and vocabulary includes learning words and entering vocabulary learning; topic and expression includes playing topic-and-expression content, viewing the text and submitting topic-and-expression exercises; reading and writing includes playing reading-and-writing content and submitting reading-and-writing exercises; grammar includes playing grammar content and submitting grammar exercises; and comprehensive practice includes single choice, vocabulary discrimination, reading comprehension and cloze tests. The learning space includes personal information, teachers' teaching plans, class learning records and my learning records.

Fig. 7

Classification of English synchronous learning functions.

Exam preparation learning

As shown in Figure 8, the exam preparation module covers personal information, study records and study plans. Personal information includes modifying personal information; study records include learning the curriculum-standard vocabulary, learning core vocabulary, vocabulary review, English listening, cloze tests, reading comprehension, English writing and comprehensive review; and study plans include checking my study plans.

Fig. 8

Classification of preparation learning functions.

Conclusion

The use of online English learning platform is helpful to reform the traditional teaching mode and provide personalised learning media for students. Based on the integration of LSA and N-gram language model, this paper puts forward an English learning resource extraction algorithm, and proposes a weighted multi-task learning model and its optimal solution for multi-task English learning, which improves the sparseness of learning variables. In addition, according to the language assisted learning model, the demand analysis, overall business and architecture of the online language learning system are designed, which mainly consists of four core modules: English vocabulary learning, English synchronous learning courses, exam preparation learning and teacher management.


[1] Cui Ying. Research on the path of college English information teaching based on "internet plus" thinking [J]. Educational Theory and Practice, 2020, 40(27): 56–58. (in Chinese).

[2] Yang Lifang. Application of Mobile Learning in College English Vocabulary Learning [J]. Foreign Language Audio-visual Teaching, 2012, (4): 54–58. (in Chinese).

[3] Liu Hongmei, Jiang Xiaoyu. Design and Practice of College English Teaching Based on WeChat Platform [J]. Foreign Languages, 2015, 31(2): 138–143. (in Chinese).

[4] Liu Fang. Literature Review of the Influence of CAI on English Vocabulary Learning [J]. Overseas English, 2021, (4): 130–131. (in Chinese).

[5] Wen-hui Peng, Yang Zongkai, Liu Qingtang. Research on the conceptual model construction of online learning behavior system [J]. China Audio-visual Education, 2013, (09): 39–46. (in Chinese).

[6] Wang Yanjie. Application Research and Operation Suggestions of Network Information Teaching System Based on CAI Resource Library [J]. Information and Computer (Theoretical Edition), 2019, 31(22): 219–222. (in Chinese).

[7] Teresa Teng. Research on Mobile Mini-learning System of English Vocabulary Based on Internet+Education Mode [J]. Automation Technology and Application, 2020, 39(03): 45–47. (in Chinese).

[8] Prior D D, Mazanov J, Meacheam D, et al. Attitude, digital literacy and self efficacy: Flow-on effects for online learning behavior [J]. Internet & Higher Education, 2016, 29: 91–97.

[9] McCarthy D, Navigli R. Semeval-2007 task 10: English lexical substitution task [C]. Proceedings of the 4th International Workshop on Semantic Evaluations. Association for Computational Linguistics, 2007: 48–53.

[10] Hassani A, Iranmanesh A, Mansouri N. Text mining using nonnegative matrix factorization and latent semantic analysis [J]. Neural Computing and Applications, 2021, 33(20).

[11] Alam M, UzZaman N, Khan M. N-gram based statistical grammar checker for Bangla and English [J]. 2007.

[12] Yin, Wu Min. Overview of N-gram model [J]. Computer System Application, 2018, 27(10): 33–38. (in Chinese).

[13] Bai Y, Tang M. Object Tracking via Robust Multitask Sparse Representation [J]. IEEE Signal Processing Letters, 2014, 21(8): 909–913.
