Chinese-English Contrastive Translation System Based on Lagrangian Search Mathematical Algorithm Model

Publication date: 15 Jul 2022
Volume & Issue: AHEAD OF PRINT
Received: 07 Mar 2022
Accepted: 09 May 2022
Introduction

The current research methods of machine translation mainly include rule-based, statistics-based, and example-based methods. Statistics-based methods are increasingly becoming a hotspot in machine translation research, and this paper studies Chinese-English machine translation based on a Lagrangian mathematical algorithm. We study and improve Och's Lagrangian search algorithm [1]. When expanding nodes, the improved algorithm expands not only the heuristically best node but also other nodes at the same level. This avoids discarding hypotheses whose heuristic scores are poor but which are in fact more reasonable. We apply the improved algorithm to a Chinese-English translation system. The experimental results show that the enhanced Lagrangian algorithm achieves better translation quality and search efficiency.

The statistical machine translation system

The current statistical machine translation method refers to the statistical machine translation method based on the source-channel model [2]. The source-channel model treats the translation problem as restoring a signal from a noisy channel. It translates a given source language text (word sequence) $f_1^J = f_1 \cdots f_j \cdots f_J$ into a target language text (word sequence) $e_1^I = e_1 \cdots e_i \cdots e_I$. Among all possible target language texts, the one with the highest probability is taken as the final translation. Statistical machine translation needs to solve three problems: parameter estimation of the language model $\Pr(e_1^I)$; parameter estimation of the translation model $\Pr(f_1^J \mid e_1^I)$; and construction of a decoding search algorithm to find the optimal translation. Therefore, when building a statistical machine translation system, we must first construct a language model and a translation model from the training corpus, and then build the decoder (search algorithm) on top of both [3]. The Chinese-English translation architecture is shown in Figure 1.
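This is the standard source-channel decision rule implied by the three problems listed above: the decoder searches for the target sentence with the highest posterior probability, which by Bayes' rule factors into the language model and the translation model,
$$\hat{e}_1^I = \mathop{\arg\max}_{e_1^I} \Pr(e_1^I \mid f_1^J) = \mathop{\arg\max}_{e_1^I} \Pr(e_1^I)\,\Pr(f_1^J \mid e_1^I).$$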

Figure 1

The operational structure of statistical machine translation

We study Chinese-English statistical machine translation based on a Lagrangian mathematical algorithm. We adopt the Lagrangian approach for two main reasons: 1) compared with other alignment models, the alignment quality of Model 4 is better; 2) because the model's twist (distortion) probabilities are conditioned on the target language, n-gram language models can be integrated during the search process.

Construction of translation model and language model

We compute word classes (parts of speech) for Chinese and English based on the Lagrangian model. We first use MKCLS to generate the word classes, and then use GIZA++ to train on these corpora to obtain the Lagrangian model parameters (Table 1).

Parameter information and scale of translation models

Parameter Number
English part of speech C(e) 129786
Chinese part of speech C(c) 145083
P(count, C(e), C(c)) twist rate 73888
P(e|c) back translation rate 331796
P(c|e) translation rate 331796
P(ϕ|e) reproduction (fertility) rate 129775

The English corpus is used as the training corpus for our language model in Chinese-English translation [4]. We use the CMU training corpus. This paper uses a trigram (ternary) language model and obtains the data shown in Table 2.

Parameters and scale of language models

Parameter | Unigram model | Bigram model | Trigram model
Number | 20000 | 173894 | 355043
Design and Implementation of Lagrangian Search Algorithm in Statistical Machine Translation
Lagrangian Search Algorithm

This section constructs the Boussinesq equation with a perturbation term using the partial Lagrangian method [5]. The admissible Lagrangian function is
$$L = -\frac{1}{2}u_t^2 - \frac{1}{2}u_x^2 - uu_x^2 + \frac{1}{2}u_{xx}^2$$

The approximate Euler-Lagrange-type equation is
$$\frac{\delta L}{\delta u} = \varepsilon(u + u_x) - u_x^2$$
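As a verification step (this expansion is ours, not spelled out in the original), applying the Euler operator $\frac{\delta}{\delta u} = \frac{\partial}{\partial u} - D_t\frac{\partial}{\partial u_t} - D_x\frac{\partial}{\partial u_x} + D_x^2\frac{\partial}{\partial u_{xx}}$ to the Lagrangian above gives
$$\frac{\delta L}{\delta u} = -u_x^2 + u_{tt} + u_{xx} + 2u_x^2 + 2uu_{xx} + u_{xxxx} = u_{tt} + u_{xx} + u_x^2 + 2uu_{xx} + u_{xxxx},$$
so the partial Euler-Lagrange condition above is equivalent to the perturbed Boussinesq equation
$$u_{tt} + u_{xx} + (u^2)_{xx} + u_{xxxx} = \varepsilon(u + u_x).$$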

The determining equation of L approximating the Noether symmetry operator $X_0 + \varepsilon X_1$ is:
$$(X_0 + \varepsilon X_1)L + D_i(\xi_0^i + \varepsilon\xi_1^i)L = \left[(\eta_0 - \xi_0^j u_j) + \varepsilon(\eta_1 - \xi_1^j u_j)\right]\left(\varepsilon(u + u_x) - u_x^2\right) + D_i(B_0^i + \varepsilon B_1^i)$$

in which
$$X_0 + \varepsilon X_1 = (\xi_0^1 + \varepsilon\xi_1^1)\frac{\partial}{\partial t} + (\xi_0^2 + \varepsilon\xi_1^2)\frac{\partial}{\partial x} + (\eta_0 + \varepsilon\eta_1)\frac{\partial}{\partial u} + \zeta_0\frac{\partial}{\partial u_t} + \zeta_1\frac{\partial}{\partial u_x} + \zeta_{11}\frac{\partial}{\partial u_{xx}}$$
$$\begin{aligned} \zeta_0 &= \eta_{0t} + \eta_{0u}u_t - (\xi_{0t}^1 + \xi_{0u}^1 u_t)u_t - (\xi_{0t}^2 + \xi_{0u}^2 u_t)u_x \\ &\quad - \varepsilon\left[\eta_{1t} + \eta_{1u}u_t - (\xi_{1t}^1 + \xi_{1u}^1 u_t)u_t - (\xi_{1t}^2 + \xi_{1u}^2 u_x)u_x - \xi_1^1 u_{tt} - \xi_1^2 u_{tx}\right] + \varepsilon(\xi_1^1 u_{tt} + \xi_1^2 u_{tx}) \end{aligned}$$
$$\begin{aligned} \zeta_1 &= \eta_{0x} + \eta_{0u}u_x - (\xi_{0x}^1 + \xi_{0u}^1 u_x)u_t - (\xi_{0x}^2 + \xi_{0u}^2 u_x)u_x \\ &\quad - \varepsilon\left[\eta_{1x} + \eta_{1u}u_x - (\xi_{1x}^1 + \xi_{1u}^1 u_x)u_x - (\xi_{1x}^2 + \xi_{1u}^2 u_x)u_x - \xi_1^1 u_{tx} - \xi_1^2 u_{xx}\right] + \varepsilon(\xi_1^1 u_{xt} + \xi_1^2 u_{xx}) \end{aligned}$$

Scoring Nodes

Each node is a hypothesis H about the translation at a particular moment, and H can be described as follows [6]: the length of the target sentence is l, and the prefix of the target sentence is e1, e2, ⋯, ek. So a hypothesis can be expressed as H = (l: e1, e2, ⋯, ek). Its score is f(H). It consists of two parts: the score g(H) for the prefix e1, e2, ⋯, ek and the score h(H) for the suffix ek+1, ek+2, ⋯, el.
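As an illustration, a hypothesis node could be represented roughly as follows. This is a minimal Python sketch under our own naming conventions; the field names are illustrative and not taken from the original system.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class Hypothesis:
    """A partial translation H = (l: e1, e2, ..., ek) with its scores."""
    target_length: int                 # l, the assumed length of the target sentence
    prefix: Tuple[str, ...]            # e1 ... ek, target words produced so far
    covered: frozenset = frozenset()   # C(n), positions of source words already translated
    g: float = 0.0                     # prefix score g(H), the known part
    h: float = 0.0                     # suffix score h(H), the heuristic estimate

    @property
    def f(self) -> float:
        """Total score f(H) = g(H) + h(H) used to rank nodes."""
        return self.g + self.h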

(1) Prefix score g(H)

We build on the Lagrangian mathematical algorithm and the trigram language model:
$$g(H) = \log s(H) + \sum_{i=1}^{k} \log p(e_i \mid e_{i-2}, e_{i-1})$$

Where s(H) is calculated as:
$$\begin{aligned} s(H) = &\prod_{i=1}^{k} n(\phi_i \mid e_i) \times \prod_{i=1}^{k}\prod_{m=1}^{\phi_i} t(\tau_{im} \mid e_i) \times \prod_{i=1,\,\phi_i>0}^{k} d_1\!\left(\pi_{i1} - c_{p_i} \mid \mathrm{class}(e_{p_i}), \mathrm{class}(\tau_{i1})\right) \\ &\times \prod_{i=1}^{k}\prod_{m=1}^{\phi_i} d_{>1}\!\left(\pi_{im} - \pi_{i(m-1)} \mid \mathrm{class}(\tau_{im})\right) \times \binom{m - \phi_0}{\phi_0} p_1^{\phi_0}(1 - p_1)^{m - 2\phi_0}\frac{1}{m^{\phi_0}} \times \prod_{k=1}^{\phi_0} t(\pi_{0k} \mid \mathrm{NULL}) \end{aligned}$$
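A hedged sketch of the prefix-score computation follows. It assumes the Model 4 score s(H) has already been computed as in the product above and that the trigram table `lm` maps word triples to probabilities; the helper name, the sentence-start padding, and the floor value for unseen trigrams are our own assumptions, not details given in the paper.

import math
from typing import Dict, Sequence, Tuple

def prefix_score(s_H: float,
                 prefix: Sequence[str],
                 lm: Dict[Tuple[str, str, str], float]) -> float:
    """g(H) = log s(H) + sum_i log p(e_i | e_{i-2}, e_{i-1})."""
    g = math.log(s_H)
    padded = ("<s>", "<s>") + tuple(prefix)      # assumed sentence-start padding
    for i in range(2, len(padded)):
        trigram = (padded[i - 2], padded[i - 1], padded[i])
        g += math.log(lm.get(trigram, 1e-7))     # crude floor instead of a real smoothing scheme
    return g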

(2) Suffix score h(H)
$$h^X(n) = \prod_{j \notin C(n)} h^X(j)$$

Where X represents the use of different probabilities, and C(n) is the set of source-language word positions that have already been covered. h(j) uses heuristic functions, calculated according to the following cases:

$$h^T(j) = \max_e P(f_j \mid e)$$ if only the translation rate is considered.

After introducing the reproduction (fertility) rate, we have
$$h^{TF}(j) = \max\left\{ \max_{e \neq e_0} P(f_j \mid e)^{\phi}\sqrt{p(\phi \mid e)},\; P(f \mid e_0) \right\}$$

After introducing the language model, $p^L(e) = \max_{u,v} P(e \mid u, v)$ exists. After using the trigram model, we get
$$h^{TFL}(j) = \max_{e,\varphi}\left\{ t(f_j \mid e) \times \sqrt[\varphi]{n(\phi \mid e) \times p^L(e)},\; t(f \mid e_0) \right\}$$

If only the twist rate is considered, then
$$h^D(j) = \max_{j' \notin C(n),\, E} P(j - j' \mid E, C(f_j))$$
We take the following formula in the algorithm:
$$h(H) = \prod_{j \notin C(n)} h^{TFL}(j) \times h^D(j)$$
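A minimal sketch of this suffix-score computation, assuming the per-position values h^{TFL}(j) and h^{D}(j) have been precomputed before the search (as described in the preprocessing step below). The function names are illustrative, and we accumulate in log space so that f(H) = g(H) + h(H) combines consistently with the log-domain prefix score; that choice is ours, not stated in the paper.

import math
from typing import Dict, FrozenSet

def suffix_score(covered: FrozenSet[int],
                 h_tfl: Dict[int, float],
                 h_dist: Dict[int, float],
                 J: int) -> float:
    """h(H) = prod over uncovered j of h_TFL(j) * h_D(j), returned in log space."""
    log_h = 0.0
    for j in range(1, J + 1):
        if j not in covered:                       # only uncovered source positions contribute
            log_h += math.log(h_tfl[j]) + math.log(h_dist[j])
    return log_h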

Search Process of Lagrangian Algorithm Based on Heuristic Function

The following takes Chinese-English translation as an example to describe the search process. For a given Chinese sentence (after word segmentation), the search input is f1, f2, ⋯, fJ, where J is the number of words in the sentence. We are looking for a suitable English translation.

Preprocessing before searching

While all words in the English vocabulary could be considered candidates for each fj, this would be very inefficient in practice, so the candidate set must be limited to an allowable range. We use the back translation rate P(ei|fj) to find candidate words ei for each fj. We first sort all P(ei|fj) from largest to smallest [7] and record the maximum value $M = \max_{e_i} P(e_i|f_j)$. We then select ei according to the following criteria: 1) P(ei|fj) ≥ c × M; 2) the total number of selected ei does not exceed n. The two parameters c and n need to be chosen experimentally for different languages; c = 0.25, n = 10 is set based on our observations and experience. The initial score for each Chinese word is calculated according to the heuristic function, as shown in formula (11). These two steps could also be carried out during the search, but recomputing them each time would be expensive, so they are computed in advance. A sketch of the candidate-selection step is given below.
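A minimal sketch of this candidate-selection step, assuming a back-translation table `p_back` mapping (e, f) pairs to P(e|f); the table layout and the function name are our assumptions, while the thresholds c and n are the parameters discussed above.

from typing import Dict, List, Tuple

def candidate_words(f_j: str,
                    p_back: Dict[Tuple[str, str], float],
                    c: float = 0.25,
                    n: int = 10) -> List[str]:
    """Select candidate English words e_i for a Chinese word f_j.

    Keep e_i with P(e_i|f_j) >= c * M, where M is the largest back-translation
    probability for f_j, and return at most n candidates.
    """
    scored = sorted(((e, p) for (e, f), p in p_back.items() if f == f_j),
                    key=lambda item: item[1], reverse=True)
    if not scored:
        return []
    M = scored[0][1]                                  # maximum back-translation rate
    return [e for e, p in scored if p >= c * M][:n]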

Search Steps

The steps of the Lagrangian algorithm search are as follows: 1) expand the optimal node according to its total score, which includes the prefix score and the heuristic score; 2) without considering the heuristic score, also expand other nodes according to the size of their prefix scores.

When expanding, we expand the node with the best score and also extend nodes at the same level in the search graph [8], i.e. nodes that cover the same number of Chinese words as the optimal node. Better hypotheses can be found by this partial width search. The search algorithm is as follows (a code-level sketch of the whole loop is given after these steps):

The Open queue and Closed queue are initialized to empty. C(n) is the position set of covered Chinese words, C(n) ⊆ {1, 2, ⋯, J}. (* input: X = {1, 2, ⋯, J}, Y = {f1, f2, ⋯, fJ}, n *)

(1) Initialize the hypothesis queue

For j = 1 to J

For every fj, for all candidate English words ei do

We create a new hypothesis and put it into the Open queue. The position of the previously translated Chinese word is set to k′ = 0; C(n) = {j};

ancestor = NULL

We calculate the prefix score g(H) according to equation (3) and the heuristic value h(H) according to equations (5) and (6); f(H) = g(H) + h(H). All hypotheses are sorted in descending order of their score f(H).

(2) Start searching

While |Open| < L do

1) Expand the optimal node. Let the hypothesis with the maximum score in the Open queue be m; we label it Closed. c = |C(n)|.

For all j ∈ X, j ∉ C(n) ∧ c < J do

We connect each candidate word ei of fj to m, creating a new hypothesis labeled Open, with ancestor = m and C(n) = C(n) ∪ {j}.

Calculate the prefix score g(H) and the suffix score h(H); f(H) = g(H) + h(H).

2) Partial width search

If c >= J/2 + 1

For all Open hypotheses h: if |C(n)| of h equals c, sort them by prefix score.

For all j ∈ X with j ∉ h.C(n) do

Create a new hypothesis and mark it as Open.

3) Determine whether to terminate

If |C(n)| = J

Then break the search. We output the hypothesis with the highest score among those with |C(n)| = J.

At this point, we output the English translation.

4) All hypotheses marked as Open are sorted in descending order of their scores.
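Putting these steps together, the control flow can be sketched roughly as follows. This is an illustrative simplification in Python, not the original implementation: `candidates` and `extend` are assumed helpers standing in for the candidate-selection and scoring steps sketched earlier, and the Closed queue bookkeeping is omitted.

import heapq
from itertools import count

def translate(source_words, candidates, extend, L=1000, d=5):
    """Best-first search with an added partial-width expansion (sketch).

    `candidates(j)` yields candidate English words for source position j;
    `extend(hyp, j, e)` returns a new Hypothesis covering position j with word e,
    with g, h and f filled in as described above.
    """
    J = len(source_words)
    tie = count()            # tie-breaker so equal scores never compare hypotheses directly
    open_heap = []
    # (1) initialise the hypothesis queue: one node per (position, candidate) pair
    for j in range(1, J + 1):
        for e in candidates(j):
            hyp = extend(None, j, e)
            heapq.heappush(open_heap, (-hyp.f, next(tie), hyp))

    def expand(node):
        for j in range(1, J + 1):
            if j not in node.covered:
                for e in candidates(j):
                    new = extend(node, j, e)
                    heapq.heappush(open_heap, (-new.f, next(tie), new))

    while open_heap and len(open_heap) < L:
        _, _, best = heapq.heappop(open_heap)
        if len(best.covered) == J:        # (3) every source word covered: output translation
            return best
        expand(best)                      # 1) expand the optimal node by total score f
        if len(best.covered) >= J // 2 + 1:
            # 2) partial-width search: also expand up to d nodes at the same coverage
            #    level, ranked by prefix score g only
            same_level = sorted((h for _, _, h in open_heap
                                 if len(h.covered) == len(best.covered)),
                                key=lambda h: h.g, reverse=True)
            for sibling in same_level[:d]:
                expand(sibling)
    return None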

Experiments and Analysis

We verify the algorithm through experiments. The evaluation of the experimental results in this paper is done manually [9]. The evaluation standard is whether the translation result conveys the meaning of the original text. L is the maximum length of the hypothesis queue; c is the multiple of the maximum back translation rate used to select candidates for each Chinese word; n is the maximum number of candidate English words per Chinese word; d is the number of additional expansion nodes.

Experiment 1

We selected 500 grammatically well-formed Chinese sentences, with an average of 6.3 words per sentence. We test the effect of the maximum number of hypotheses allowed in the hypothesis queue on translation performance under normal circumstances [10]. The test data are shown in Table 3 (c = 0.25, n = 10, d = 5).

Effect of hypothesis queue size L on translation

Maximum hypothesis queue length L | 500 | 1000 | 1500 | 2000
Time spent translating a sentence | 11.34 s | 14.65 s | 24.89 s | 39.64 s
Evaluation results | 40% | 50.50% | 50.56% | 50.56%

The heuristic we employ is guaranteed to overestimate the remaining cost. Increasing L beyond 1000 brings almost no gain in the evaluation result while the translation time grows quickly, so L = 1000 is a more appropriate setting.

Experiment 2

In Chinese-English translation, we need to set the sizes of the parameters c and n when selecting candidate English words for each Chinese word [11]. We tested 5000 grammatically well-formed Chinese sentences with an average of 6.7 words per sentence. The experimental results are shown in Table 4 and Table 5 (L = 1000, d = 5).

When n=20, the effect of the size of c on translation

The multiple c of the maximum back translation rate | 0.01 | 0.1 | 0.2 | 0.25 | 0.3
Average time spent per sentence | 18.65 s | 17.25 s | 14.76 s | 14.68 s | 13.12 s
Evaluation results | 43.30% | 50.18% | 50.25% | 50.40% | 50.10%

When n=10, the effect of the size of c on translation

The multiple c of the maximum back translation rate | 0.01 | 0.1 | 0.2 | 0.25 | 0.3
Average time spent per sentence | 18.35 s | 17.15 s | 14.73 s | 14.62 s | 13.08 s
Evaluation results | 44.60% | 50.20% | 50.34% | 50.40% | 50.10%

In Chinese-English translation, a Chinese word fj may correspond to multiple English translations ei, i = 1, 2, ⋯; the number may be greater than 10, but this situation is rare for Chinese and English, so choosing 10 candidates is enough. If c is too small, many wrong candidate words are introduced, and these words may interfere with subsequent translation. If c is too large, many possibly correct words are missed, and the expression of the meaning of the translation is affected. We found that translation quality and efficiency were both relatively satisfactory at c = 0.25, n = 10.

Experiment 3

Chinese and English are very different languages. Although we can filter out most of the completely wrong translations for each Chinese word through Experiment 2, other differences are not well resolved by the heuristic function alone. So we added extended nodes to give the search more options [12]. The corpus we selected is the same as in Experiment 2. The experimental results are shown in Table 6 (L = 1000, c = 0.25, n = 10).

The effect of increasing the number of extended nodes d on translation

Number of additional expansion nodes d | 0 (only the best node) | 5 | 10
Average time spent per sentence | 6.02 s | 14.65 s | 22.87 s
Evaluation results | 40.10% | 50.40% | 49.80%

Some hypotheses have high heuristic scores, but the translations obtained by searching in this direction are not good. For example, the word-level translation rates in a particular translation may be relatively high [13], while the order between words is far from what is expected. This happens because the heuristic we use always overestimates the remaining cost: for the distortion term, we always assume the most favorable case, but sometimes no such favorable distortion is actually available. Therefore, the translation obtained by searching in this direction may be far from the correct result.

Consequently, we not only expand the optimal node but also expand other nodes according to the prefix score. This gives the search more options and alleviates the above situation. If too many nodes are expanded, the search time is prolonged; if too few, better hypotheses may be missed. We see that expanding five additional nodes is a good choice when the hypothesis queue is limited to 1000.

Conclusion

We have studied and improved a search algorithm based on a Lagrangian mathematical algorithm and applied the improved algorithm to Chinese-English machine translation. In the experiments, we found that statistical machine translation has higher accuracy in finding the target word because it considers the relationships between words. Since the heuristic search alone may miss better translations, we conduct a partial width search along with the Lagrangian search. Experiments show that the quality and efficiency of translation are relatively satisfactory. Through the experiments, we also realize that statistical machine translation is somewhat dependent on the corpus and consumes a lot of time and space; therefore, more time is spent on word reordering during the search.


References

Bowker, L. Chinese speakers' use of machine translation as an aid for scholarly writing in English: a review of the literature and a report on a pilot workshop on machine translation literacy. Asia Pacific Translation and Intercultural Studies, 2020; 7(3): 288–298. DOI: 10.1080/23306343.2020.1805843

Zhang, B., Xiong, D., Xie, J., & Su, J. Neural machine translation with GRU-gated attention model. IEEE Transactions on Neural Networks and Learning Systems, 2020; 31(11): 4688–4698. DOI: 10.1109/TNNLS.2019.2957276

Zhang, T., Huang, H., Feng, C., & Wei, X. Similarity-aware neural machine translation: reducing human translator efforts by leveraging high-potential sentences with translation memory. Neural Computing and Applications, 2020; 32(23): 17623–17635. DOI: 10.1007/s00521-020-04939-y

Ye, Z. Polyseme Transfer in the Chinese to English Machine Translation Output and Chinese Students' English Writing. International Journal of TESOL Studies, 2021; 3(2): 88–105

Lin, J., Liu, Y., & Cleland-Huang, J. Information retrieval versus deep learning approaches for generating traceability links in bilingual projects. Empirical Software Engineering, 2022; 27(1): 1–33. DOI: 10.1007/s10664-021-10050-0

Li, X., Xu, C., Wang, X., Lan, W., Jia, Z., Yang, G., & Xu, J. COCO-CN for cross-lingual image tagging, captioning, and retrieval. IEEE Transactions on Multimedia, 2019; 21(9): 2347–2360. DOI: 10.1109/TMM.2019.2896494

Wang, L. Influences of Language Shift on Speech Fluency in Memory Production of Unbalanced Chinese-English Bilinguals. Theory and Practice in Language Studies, 2022; 12(2): 375–381. DOI: 10.17507/tpls.1202.21

Gençoğlu, M. & Agarwal, P. Use of Quantum Differential Equations in Sonic Processes. Applied Mathematics and Nonlinear Sciences, 2021; 6(1): 21–28. DOI: 10.2478/amns.2020.2.00003

Rahaman, H., Kamrul Hasan, M., Ali, A. & Shamsul Alam, M. Implicit Methods for Numerical Solution of Singular Initial Value Problems. Applied Mathematics and Nonlinear Sciences, 2021; 6(1): 1–8. DOI: 10.2478/amns.2020.2.00001

Huang, L., Chen, W., Liu, Y., Zhang, H., & Qu, H. Improving neural machine translation using gated state network and focal adaptive attention network. Neural Computing and Applications, 2021; 33(23): 15955–15967. DOI: 10.1007/s00521-021-06444-2

Dai, B. Research on Chinese and English language information retrieval algorithm based on bilingual theme model. Cluster Computing, 2019; 22(2): 3681–3688. DOI: 10.1007/s10586-018-2218-8

Yin, Y., Su, J., Wen, H., Zeng, J., Liu, Y., & Chen, Y. POS tag-enhanced coarse-to-fine attention for neural machine translation. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 2019; 18(4): 1–14. DOI: 10.1145/3321124

Khan, F. A., & Abubakar, A. Machine translation in natural language processing by implementing artificial neural network modelling techniques: An analysis. International Journal on Perceptive and Cognitive Computing, 2020; 6(1): 9–18
