Application of Linear Partial Differential Equation Theory in Guiding Football Scientific Training

Published online: 15 Jul 2022
Volume & Issue: AHEAD OF PRINT
Pages: -
Received: 06 Mar 2022
Accepted: 08 May 2022
Introduction

With the continuous improvement of China's economic level, the sports industry is booming, and modern football has gradually moved toward global integration. How to use corner kicks effectively is therefore on the agenda. The application of corner kick tactics has an important influence on football players' corner kick goals, and tactical coordination is widely used by players during corner kicks [1]. This makes it difficult to judge the position of the ball accurately during a corner kick. Against this background, how to effectively extract the trajectory of the corner kick has become a fundamental problem that modern sports research urgently needs to solve. A corner kick video image trajectory extraction method based on Kalman filter theory can combine Kalman filter prediction with linear interpolation theory [2] and is a fundamental way to solve the above problem; consequently, experts and scholars in this field attach great importance to such models. We propose a Kalman filter-based method for extracting trajectories from video images of football players' corner kicks. The experimental results show that the proposed method can better extract the movement trajectory of the football during a corner kick and that its extraction efficiency is high.

The extraction principle of the video image trajectory of the football player's corner kick

We first give the recursive form of sliding-window kernel ridge regression under a Gaussian kernel function [3]. We then extract the football trajectory during the player's corner kick in a sliding-window fashion, taking the trajectory positions of the neighboring frames as input. The position of the football in the next frame is predicted from kernel ridge regression theory, and from this we complete the extraction of the video image trajectory of the football player's corner kick. The specific process is as follows:

Dynamic structural equations describe the trajectory of a football player's corner kick shot. The current position of the football is related to its positions in the nearest D frames (D ≥ 1). We express this with formula (1):

$$\begin{cases} X_t = f_X\left(X_{t-D}, \ldots, X_{t-1}\right) \\ Y_t = f_Y\left(Y_{t-D}, \ldots, Y_{t-1}\right) \end{cases}, \quad D \ge 1 \tag{1}$$

$X_t$ and $Y_t$ represent the parameters of the dynamic structural equation. We take the X direction as the benchmark [4]. The input of frame t in formula (1) is $\vec X_t = \left[X_{t-D}, \ldots, X_{t-1}\right]$. We turn the inner product in the high-dimensional feature space into the computation of a kernel function, expressed by formula (2):

$$\left\langle \varphi\left(\vec X_1\right), \varphi\left(\vec X_2\right)\right\rangle = \kappa\left(\vec X_1, \vec X_2\right) \tag{2}$$

In the high-dimensional feature space F we transform $X_t = f_X\left(\vec X_t\right)$ into a linear function:

$$X_t = \omega^T \varphi\left(\vec X_t\right) + b \tag{3}$$

where ω and b represent the corresponding gain and offset coefficients. Suppose the inputs of the most recent N frames of the corner kick image are mapped to $\varphi = \left[\varphi\left(\vec X_{t-N+1}\right)^T, \ldots, \varphi\left(\vec X_{t-1}\right)^T, \varphi\left(\vec X_t\right)^T\right]^T$. We obtain $\hat\omega$ and $\hat b$ by a ridge regression estimate of the coefficients with respect to $X = \left[X_{t-N+1}, \ldots, X_{t-1}, X_t\right]^T$. Letting $e_k = X_k - \hat\omega^T \varphi\left(\vec X_k\right) - \hat b$, we set the cost function J according to regression estimation theory, expressed by formula (4):

$$J = \lambda \left\|\hat\omega\right\|^2 + \sum_{k=t-N+1}^{t} e_k^2 \tag{4}$$

The football trajectory is subject to the constraints $e_k = X_k - \hat\omega^T \varphi\left(\vec X_k\right) - \hat b$, under which the cost function J is minimized. We introduce Lagrangian conditions with the multiplier vector $\alpha = \left[\alpha_{t-N+1}, \ldots, \alpha_{t-1}, \alpha_t\right]^T$ and express the final cost function J by equation (5):

$$J = \lambda \left\|\hat\omega\right\|^2 + \sum_{k=t-N+1}^{t} e_k^2 + \sum_{k=t-N+1}^{t} \alpha_k\left(X_k - \hat\omega^T \varphi\left(\vec X_k\right) - \hat b - e_k\right) \tag{5}$$

Based on the $\hat\omega$ and $\hat b$ estimated from the corner kick image frames up to frame t, we can extract the position of the football trajectory in frame t+1. We calculate its extracted value with formula (6):

$$\hat X_{t+1} = \varphi\left(\vec X_{t+1}\right)^T \hat\omega + \hat b = C^T A^{-1}\left(X - \hat b \mathbf{1}\right) + \hat b \tag{6}$$

where

$$C = \left[\kappa\left(\vec X_{t-N+1}, \vec X_{t+1}\right), \ldots, \kappa\left(\vec X_t, \vec X_{t+1}\right)\right]^T \tag{7}$$
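To make the recursion above concrete, the following minimal NumPy sketch predicts the next coordinate of the ball with sliding-window kernel ridge regression under a Gaussian kernel, in the spirit of formulas (1)–(6). The window length N, delay D, regularization λ, kernel width σ, and the simple offset estimate are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def gaussian_kernel(x1, x2, sigma=1.0):
    # kappa(x1, x2) = exp(-||x1 - x2||^2 / (2 * sigma^2))
    return np.exp(-np.sum((x1 - x2) ** 2) / (2.0 * sigma ** 2))

def predict_next(positions, D=3, N=10, lam=1e-2, sigma=1.0):
    """Predict the next coordinate from the last N delay vectors (formulas (1)-(6)).

    positions : 1-D sequence of past coordinates (X or Y direction), length >= N + D.
    """
    positions = np.asarray(positions, dtype=float)
    t = len(positions)
    # Delay vectors X_k = [x_{k-D}, ..., x_{k-1}] and their targets x_k for the window
    inputs = np.array([positions[k - D:k] for k in range(t - N, t)])
    targets = positions[t - N:]
    # Kernel matrix over the window plus ridge regularization: A = K + lam * I
    K = np.array([[gaussian_kernel(a, b, sigma) for b in inputs] for a in inputs])
    A = K + lam * np.eye(N)
    b_hat = targets.mean()                       # crude offset estimate (assumption)
    alpha = np.linalg.solve(A, targets - b_hat)  # A^{-1} (X - b*1)
    # Kernel vector C between the window inputs and the newest delay vector
    x_new = positions[-D:]
    C = np.array([gaussian_kernel(a, x_new, sigma) for a in inputs])
    return C @ alpha + b_hat                     # formula (6)
```

Calling `predict_next` separately on the X and Y coordinate sequences yields a predicted ball position for frame t+1.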

Kalman filter-based corner kick video image trajectory extraction method
Soccer target feature extraction

In extracting the trajectory of the football player's corner kick video image, we first use the particle filter as the trajectory tracking framework for the corner kick. Visual features are used to obtain candidate balls in each frame of the corner kick video. Particle filter theory is then used to track the football and form the initial trajectory [5]. The specific process is as follows:

Let M represent the number of pixels in the football target area during the corner kick and k(·) the Gaussian kernel function. Define $b(l_i): R^2 \to \{1, 2, \ldots, U\}$ as a quantization function that maps the gray value at position $l_i$ of the football target area to one of U quantization levels. The gray-level distribution of a single color channel of the football target area is expressed by formula (8):

$$p_N = \left\{p_N^{(u)}\right\}_{u=1,2,\ldots,U} \tag{8}$$

where N = {R, G, B} indexes the color channels. The value of interval u is given by equation (9):

$$p_N^{(u)} = \sum_{i=1}^{M} k\left(\left\|\frac{l_c - l_i}{h}\right\|\right)\delta\left[b\left(l_i\right) - u\right] \tag{9}$$

Here $l_c$ represents the center position of the football target area during the corner kick, h represents the tracking window size, and δ(·) is the Delta function, which determines whether the gray value at position $l_i$ of the football target falls in interval u. We combine the gray distributions of the three color channels and normalize them with the constant C to obtain the color feature of the football target during the corner kick:

$$p_{color} = C \times \left\{p_R, p_G, p_B\right\} = \left\{p^{(u)}\right\}_{u=1,\ldots,3U} \tag{10}$$

$$C = 1\bigg/\left(3\sum_{i=1}^{M} k\left(\left\|l_c - l_i\right\|/h\right)\right) \tag{11}$$
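As an illustration, the following sketch computes a kernel-weighted color histogram in the sense of formulas (8)–(11) for an RGB candidate region; the Gaussian spatial kernel, the quantization level U, and the bandwidth choice are assumptions made for the example.

```python
import numpy as np

def color_histogram(patch, U=16):
    """Kernel-weighted color histogram of a football candidate region.

    patch : H x W x 3 uint8 RGB region centred on the candidate ball.
    Returns a normalized feature vector of length 3*U (formula (10)).
    """
    H, W, _ = patch.shape
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    h = np.hypot(cy, cx)                        # tracking window size (bandwidth)
    ys, xs = np.mgrid[0:H, 0:W]
    r = np.hypot(ys - cy, xs - cx) / h
    weights = np.exp(-0.5 * r ** 2)             # Gaussian spatial kernel k(.)
    hist = np.zeros(3 * U)
    for n in range(3):                          # channels N = {R, G, B}
        bins = (patch[:, :, n].astype(int) * U) // 256   # b(l_i): gray value -> U levels
        for u in range(U):
            hist[n * U + u] = weights[bins == u].sum()   # formula (9)
    return hist / hist.sum()                    # normalization over 3 channels, formula (11)
```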

For a pixel I(x, y) of the football target area during the corner kick, we calculate the horizontal gradient $G_x$ and vertical gradient $G_y$:

$$\begin{cases} G_x\left(x, y\right) = I\left(x+1, y\right) - I\left(x-1, y\right) \\ G_y\left(x, y\right) = I\left(x, y+1\right) - I\left(x, y-1\right) \end{cases} \tag{12}$$

Then the modulus ρ(x, y) and direction θ(x, y) of the gradient at point (x, y) are expressed by formula (13):

$$\begin{cases} \rho\left(x, y\right) = \sqrt{G_x^2\left(x, y\right) + G_y^2\left(x, y\right)} \\ \theta\left(x, y\right) = \arctan\left[G_y\left(x, y\right)/G_x\left(x, y\right)\right] \end{cases} \tag{13}$$
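The gradient computation of formulas (12)–(13) can be sketched as follows; leaving the image borders at zero and using arctan2 in place of arctan to avoid division by zero are implementation choices of the example, not part of the paper.

```python
import numpy as np

def gradient_magnitude_direction(gray):
    """Per-pixel gradient modulus and direction of a grayscale image I(x, y)."""
    gray = gray.astype(float)
    Gx = np.zeros_like(gray)
    Gy = np.zeros_like(gray)
    # Central differences, formula (12); borders are left at zero
    Gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]
    Gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    rho = np.sqrt(Gx ** 2 + Gy ** 2)            # modulus, formula (13)
    theta = np.arctan2(Gy, Gx)                  # direction (arctan2 avoids division by zero)
    return rho, theta
```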

We can obtain the HOG feature vector of the football target area during the corner kick from the gradient modulus and direction [6]. We use particle filtering to extract the corner kick trajectory, described by the state transition equation and observation equation:

$$\begin{cases} x_k = f_k\left(x_{k-1}, \omega_{k-1}\right) \\ z_k = h_k\left(x_k, n_k\right) \end{cases} \tag{14}$$

$x_k$ represents the system state vector at time step k, and $z_k$ the observation vector at time step k. $f_k$ represents the state transition function and $h_k$ the system observation function. $\omega_k$ and $n_k$ represent process noise and observation noise, respectively [7]. Suppose the football target state $x_k$ during the corner kick obeys the Markov property $p(x_k \mid x_{k-1})$ and the initial probability density function $p(x_0 \mid z_0) = p(x_0)$ is known. Then the football target probability density function $p(x_k \mid z_{1:k})$ can be obtained recursively through the prediction and update stages during corner kick tracking.
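A compact sketch of one prediction/update cycle of formula (14) is given below. The constant-velocity transition, Gaussian process noise, and resampling based on the effective sample size are common choices assumed here, not the paper's exact settings; `observe_likelihood` stands for any likelihood built from the color and HOG features above.

```python
import numpy as np

def particle_filter_step(particles, weights, observe_likelihood, q_std=2.0):
    """One prediction/update cycle of the particle filter, formula (14).

    particles : N x 4 array of states [x, y, vx, vy].
    weights   : length-N importance weights.
    observe_likelihood : callable mapping a position (x, y) to p(z_k | x_k).
    """
    N = len(particles)
    # Prediction: constant-velocity transition f_k plus process noise w_{k-1}
    particles[:, 0] += particles[:, 2]
    particles[:, 1] += particles[:, 3]
    particles += np.random.normal(0.0, q_std, particles.shape)
    # Update: reweight by the observation likelihood h_k
    weights = weights * np.array([observe_likelihood(p[0], p[1]) for p in particles])
    weights = weights / (weights.sum() + 1e-12)
    # Resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = np.random.choice(N, N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return particles, weights
```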

Processing and connection of football player's corner kick trajectory

We take the initial trajectory of the football player's corner kick as the basis for extracting the corner kick video image trajectory. We confirm the accurate ball trajectory by selecting among the candidate corner kick trajectories [8]. On this basis, we combine Kalman filter prediction and linear interpolation theory to fill in the ball positions that are missed in individual frames of the corner kick. From this, we complete the extraction of the video image trajectory of the football player's corner kick. The specific process is as follows:

Let $C_f$ represent the set of ball trajectories during the corner kick. We initialize the collection $C_f$ by treating all trajectories as its elements:

$$C_f = \left\{T_i, i = 1, 2, \ldots, N\right\} \tag{15}$$

$T_i$ represents the i-th trajectory in the current football target area, and N represents the total number of trajectories in the current football target area. We select two trajectories $T_u$ and $T_v$ of the football target area during the corner kick. When the two trajectories intersect, we keep the longer one as the actual corner kick trajectory, expressed by formula (16):

$$C_f = \begin{cases} C_f - \left\{T_u\right\}, & \text{if } L_u < L_v \wedge T_u \cap T_v \neq \emptyset \\ C_f - \left\{T_v\right\}, & \text{if } L_u \ge L_v \wedge T_u \cap T_v \neq \emptyset \end{cases} \tag{16}$$

In the formula:

$$\begin{cases} L_u = K_{\max,u} - K_{\min,u} \\ L_v = K_{\max,v} - K_{\min,v} \end{cases} \tag{17}$$
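The pruning rule of formulas (16)–(17) can be sketched as below, assuming each candidate trajectory is stored as a dictionary mapping frame index to ball position; two trajectories are considered to intersect when they share at least one frame.

```python
def prune_trajectories(trajectories):
    """Keep the longer of every pair of intersecting trajectories.

    trajectories : list of dicts mapping frame index k -> (x, y) position.
    Implements formulas (16)-(17), where L = K_max - K_min is the trajectory length.
    """
    kept = list(trajectories)
    changed = True
    while changed:
        changed = False
        for i in range(len(kept)):
            for j in range(i + 1, len(kept)):
                Tu, Tv = kept[i], kept[j]
                if set(Tu) & set(Tv):                      # trajectories intersect in time
                    Lu = max(Tu) - min(Tu)                 # formula (17)
                    Lv = max(Tv) - min(Tv)
                    kept.remove(Tu if Lu < Lv else Tv)     # formula (16)
                    changed = True
                    break
            if changed:
                break
    return kept
```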

Based on the set of corner kick trajectories obtained from equation (16), we combine the Kalman filter and linear interpolation theory to fill in the missed frames between two corner kick trajectories [9]. Assume that there are two football target trajectories $T_u$ and $T_v$ during the corner kick, with $K_{\max,u} < K_{\min,v}$. We first use the Kalman filter to obtain the predicted values of the football target on $T_u$ and $T_v$ over the interval $[K_{\max,u}, K_{\min,v}]$, denoted $\hat p_{k,u}$ and $\hat p_{k,v}$ with $k \in [K_{\max,u}, K_{\min,v}]$. Then we find the two points in this interval at which the two trajectories are closest to each other. They correspond to frame a on trajectory $T_u$ and frame b on trajectory $T_v$, so that

$$\left(a, b\right) = \arg\min_{a,b} \, dist\left(\hat p_{a,u}, \hat p_{b,v}\right) \tag{18}$$

where:

$$\begin{cases} dist\left(\hat p_{a,u}, \hat p_{b,v}\right) = \sqrt{\left(\hat x_{a,u} - \hat x_{b,v}\right)^2 + \left(\hat y_{a,u} - \hat y_{b,v}\right)^2} \\ \hat p_{a,u} = \left(\hat x_{a,u}, \hat y_{a,u}\right), \quad \hat p_{b,v} = \left(\hat x_{b,v}, \hat y_{b,v}\right) \end{cases} \tag{19}$$

We obtain the values of a and b by solving equations (18) and (19). The positions of the missed ball after frame a are described by the predicted values of trajectory $T_v$, and the position of the ball at frame a is the average of the predicted values of trajectories $T_u$ and $T_v$ in that frame. We use formula (20) to calculate:

$$p_k = \begin{cases} \hat p_{k,u}, & K_{\max,u} \le k < a \\ \left(\hat p_{k,u} + \hat p_{k,v}\right)/2, & k = a \\ \hat p_{k,v}, & a < k < K_{\min,v} \end{cases} \tag{20}$$

When a is less than b, we use linear interpolation to obtain a more accurate position of the football target during the corner kick:

$$p_k = \begin{cases} \hat p_{k,u}, & K_{\max,u} \le k \le a \\ \hat p_{a,u} + \dfrac{k-a}{b-a}\left(\hat p_{b,v} - \hat p_{a,u}\right), & a < k < b \\ \hat p_{k,v}, & b \le k \le K_{\min,v} \end{cases} \tag{21}$$

In this way, we can accurately fill in the missed ball positions between the corner kick trajectories and obtain a complete trajectory of the football player's corner kick.
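Putting formulas (18)–(21) together, the following sketch fills the gap between two trajectories. It assumes the Kalman-predicted positions of both trajectories over the interval [K_max,u, K_min,v] are already available as NumPy arrays keyed by frame index; the brute-force search for the closest pair (a, b) is an illustrative choice.

```python
import numpy as np

def fill_gap(pred_u, pred_v, k_max_u, k_min_v):
    """Fill missed ball positions between trajectories T_u and T_v.

    pred_u, pred_v : dicts mapping frame k -> np.array([x, y]) of Kalman-predicted
                     positions of T_u and T_v over the gap [k_max_u, k_min_v].
    Returns a dict frame k -> filled position.
    """
    frames = range(k_max_u, k_min_v + 1)
    # Closest pair of predicted points, formulas (18)-(19)
    a, b = min(((i, j) for i in frames for j in frames),
               key=lambda ij: np.linalg.norm(pred_u[ij[0]] - pred_v[ij[1]]))
    filled = {}
    for k in frames:
        if k < a:
            filled[k] = pred_u[k]                           # before a: follow T_u
        elif k == a:
            # average when the closest points coincide (formula (20)), otherwise keep T_u
            filled[k] = (pred_u[k] + pred_v[k]) / 2.0 if a == b else pred_u[k]
        elif k < b:
            t = (k - a) / float(b - a)                      # linear interpolation, formula (21)
            filled[k] = (1 - t) * pred_u[a] + t * pred_v[b]
        else:
            filled[k] = pred_v[k]                           # from frame b on: follow T_v
    return filled
```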

Experimental Results and Analysis

The following experiments demonstrate the effectiveness of the proposed Kalman filter-based method for extracting the trajectory of a football player's corner kick video image. We built an experimental platform in the MATLAB 7.0 environment to extract the corner kick video image trajectory. The experimental data are taken from the Premier League match between Liverpool and Watford in the 2019–2020 season [10]. The resolution is 720×404. We use the Kalman filter method and the least-squares method to extract the trajectory of the corner kick video image. Because of the influence of noise, we conducted 30 experiments on the two sets of simulated trajectories, respectively, and compare the average mean square error (%) and root mean square error (%) of the two methods. The average mean square error and root mean square error are calculated as

$$MMSE = \frac{1}{50 \times K}\sum_{n=1}^{50}\sum_{t=1}^{K}\left(\hat X_t^n - X_t^n\right)^2$$

$$RMSE = \sqrt{\frac{1}{K}\sum_{t=1}^{K}\left(\hat X_t - X_t\right)^2}$$

where $\hat X_t^n$ represents the extracted value of frame t in the n-th experiment and $\hat X_t$ represents the extracted value of frame t. We use the comparison results to measure the extraction error of the two methods [11]. The statistical results of the 30 extraction experiments of the two methods are shown in Table 1 and Table 2.
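For reference, the two error measures defined above can be computed as in the following sketch, assuming the per-frame extracted positions of all runs are stacked into a single array and compared against the corresponding ground truth; the array layout is an assumption of the example.

```python
import numpy as np

def average_mse(extracted, truth):
    """Average mean square error over all runs and frames.

    extracted, truth : arrays of shape (n_runs, K) with per-frame positions.
    """
    return np.mean((np.asarray(extracted) - np.asarray(truth)) ** 2)

def rmse(extracted_run, truth_run):
    """Root mean square error of a single run of K frames."""
    diff = np.asarray(extracted_run) - np.asarray(truth_run)
    return np.sqrt(np.mean(diff ** 2))
```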

Table 1. Statistical results of the video image trajectory extraction error of the Kalman filter method

Number of experiments	Mean square error /%	RMSE /%
5	3.21	4.21
10	3.09	3.79
15	2.81	2.44
20	2.44	2.21
25	1.94	2.10
30	1.01	0.94

Table 2. Statistical results of the video image trajectory extraction error of the least-squares method

Number of experiments	Mean square error /%	RMSE /%
5	4.21	48.21
10	6.32	38.12
15	4.12	24.44
20	4.98	20.34
25	4.21	19.44
30	3.98	17.64

The extraction errors of the Kalman filter method are all smaller than those of the least-squares method for extracting the corner kick video image trajectory [12]. This is mainly because the Kalman filter method confirms the accurate ball trajectory by selecting among the candidate corner kick trajectories and then combines Kalman filter prediction with linear interpolation theory to fill in the ball positions missed in individual frames. As a result, the extraction error of the corner kick video image trajectory obtained by the Kalman filter method is smaller.

We use the Kalman filter method and the least-squares method to extract the trajectory of the football player's corner kick [13] and compare the extraction efficiency (%) of the two methods for the corner kick video image trajectory. The comparison results are shown in Figure 1.

Figure 1

Comparison of the trajectory extraction efficiency of corner kick video images using the Kalman filter and least-squares methods

The extraction efficiency of the least-squares method for the corner kick video image trajectory is lower than that of the Kalman filter method. This is mainly because the Kalman filter method first uses visual features to obtain the candidate balls in each image frame during the corner kick and then combines Kalman filter prediction with linear interpolation theory to fill in the ball positions missed in individual frames. In this way, the Kalman filter method extracts the video image trajectory of the football player's corner kick with high efficiency.

Conclusion

Existing methods for extracting the video image trajectory of a football player's corner kick suffer from significant extraction errors. This paper proposes a Kalman filter-based method for extracting the corner kick video image trajectory. The experimental results show that the proposed method can better extract the football player's corner kick trajectory and that its extraction efficiency is high.


[1] Chen, S., Liang, L., Ouyang, J., & Yuan, Y. Accurate 3D motion tracking by combining image alignment and feature matching. Multimedia Tools and Applications, 2020; 79(29): 21325–21343. DOI: 10.1007/s11042-020-08966-8

[2] Wu, W., Xu, M., Liang, Q., Mei, L., & Peng, Y. Multi-camera 3D ball tracking framework for sports video. IET Image Processing, 2020; 14(15): 3751–3761. DOI: 10.1049/iet-ipr.2020.0757

[3] Nayak, R. J., & Chaudhari, J. P. Object Tracking Using Dominant Sub Bands in Steerable Pyramid Domain. International Journal on Information Technologies and Security, 2020; 12(1): 61–74

[4] Abulwafa, A. E., Saleh, A. I., Ali, H. A., & Saraya, M. S. A fog based ball tracking (FB2T) system using intelligent ball bees. Journal of Ambient Intelligence and Humanized Computing, 2020; 11(11): 5735–5754. DOI: 10.1007/s12652-020-01948-6

[5] Zheng, Y., Zeng, Q., Lv, C., Yu, H., & Ou, B. Mobile Robot Integrated Navigation Algorithm Based on Template Matching VO/IMU/UWB. IEEE Sensors Journal, 2021; 21(24): 27957–27966. DOI: 10.1109/JSEN.2021.3122947

[6] Teng, Y., Yang, S., Huang, Y., & Barker, N. Research on space optimization of historic blocks on Jiangnan from the perspective of place construction. Applied Mathematics and Nonlinear Sciences, 2021; 6(1): 201–210. DOI: 10.2478/amns.2021.1.00019

[7] Aghili, A. Complete Solution For The Time Fractional Diffusion Problem With Mixed Boundary Conditions by Operational Method. Applied Mathematics and Nonlinear Sciences, 2021; 6(1): 9–20. DOI: 10.2478/amns.2020.2.00002

[8] Cuevas, C., Quilon, D., & García, N. Techniques and applications for soccer video analysis: A survey. Multimedia Tools and Applications, 2020; 79(39): 29685–29721. DOI: 10.1007/s11042-020-09409-0

[9] Rana, M., & Mittal, V. Wearable sensors for real-time kinematics analysis in sports: a review. IEEE Sensors Journal, 2020; 21(2): 1187–1207. DOI: 10.1109/JSEN.2020.3019016

[10] Waldron, M., Harding, J., Barrett, S., & Gray, A. A new foot-mounted inertial measurement system in soccer: reliability and comparison to global positioning systems for velocity measurements during team sport actions. Journal of Human Kinetics, 2021; 77(1): 37–50. DOI: 10.2478/hukin-2021-0010

[11] Walia, G. S., Kumar, A., Saxena, A., Sharma, K., & Singh, K. Robust object tracking with crow search optimized multi-cue particle filter. Pattern Analysis and Applications, 2020; 23(3): 1439–1455. DOI: 10.1007/s10044-019-00847-7

[12] Shevtsova, I. G., Navolotskii, A. A., Eremich, N. A., & Shestakov, M. P. Way of Assessing an Athlete's Upright Posture Control while Performing Tracking Movements. Moscow University Computational Mathematics and Cybernetics, 2020; 44(4): 203–217. DOI: 10.3103/S0278641920040056

[13] Ming, Y., & Zhang, Y. Efficient scalable spatiotemporal visual tracking based on recurrent neural networks. Multimedia Tools and Applications, 2020; 79(3): 2239–2261. DOI: 10.1007/s11042-019-08331-4
