Journal Details
License
Format
Journal
eISSN
2444-8656
First Published
01 Jan 2016
Publication timeframe
2 times per year
Languages
English
Open Access

High Simulation Reconstruction of Crowd Animation Based on Optical Flow Constraint Equation

Published Online: 15 Jul 2022
Volume & Issue: AHEAD OF PRINT
Received: 10 Apr 2022
Accepted: 16 Jun 2022
Introduction

As modern society continues to evolve, artificial intelligence technology is advancing rapidly. Most of the domestic population now relies heavily on intelligent digital cameras, which can adapt to changing environmental demands. However, China's existing hardware still suffers from several problems, such as poor stability and weak anti-interference ability, and cannot meet the functional requirements of current application scenes; as a result, the image quality and stability of cameras in most scenes remain poor and cannot satisfy user needs. In addition, some crowd animation applications offer only a single function and lack stable motion pose and motion rate characteristics[1,2]. To address these problems, this paper proposes a new concept and method for modeling the robot's 3D environment. First, the existing optical flow constraint equation models are analyzed and summarized and, combined with the actual situation, two mainstream design ideas are given: the first uses a multi-objective optimization algorithm (M-HMM) to improve camera quality in the simulated scene on the basis of the original; the second uses mobile video screen technology to control the camera motion rate in the simulated scene[3].

The optical flow constraint equation describes a two-dimensional plane formed in three-dimensional space by a moving surface: illuminated by a certain number of light sources, light waves are reflected from one point to another[4].

There is a certain connection between illumination and the motion field. A control function can therefore be used to simulate factors such as the direction and position of light propagation and the angle of solar radiation, so as to model and analyze an object and study the law governing its trajectory changes. The control variable method can also be used: once a mathematical model has been established by this method and the equation converted into an algebraic expression, it can be applied directly to other fields, such as laser interference and kinematics simulation. Based on the animation model of the optical flow constraint equation, researchers can go deeper: establish the mathematical model by the control variable method, use graphs of its parameters under given conditions to simulate the real scene, transform the image into a functional relationship, analyze the regularities in the experimental results, and finally use MATLAB software to generate experimental data and graphs[5].

At present, there are many ways to construct animation models of crowd motion. One builds a 2D scene based on optical flow constraint equations: all elements in the entire video picture can be combined, and the overall effect evaluated and analyzed, so as to achieve the expected function and finally obtain a complete target animation with the desired visual characteristics. Another approach determines the relationships between areas according to the motion law and motion speed of each element in the scene, and on that basis completes the design of the entire animation scene[6]. This method can synthesize the required research content and reduce the difficulty of modeling to a certain extent, while also giving a clear picture of the motion areas and their spatial positions[7].

A high-fidelity realistic rendering method for crowd animation
High simulation realistic rendering process

The process of high-fidelity rendering is as follows: first, the researcher needs to understand the history of the optical flow events the user has experienced and the human visual and psychological reaction to the scene; second, a model is built for the simulation design, to observe what kind of picture effect it produces in different environments; finally, based on the changes in the scene and the results of human visual processing, a conclusion is drawn on the feasibility of the application. The specific flow chart is shown in Figure 1.

Figure 1

Highly imitated realistic drawing process

Visualization of crowd animation based on optical flow

Traditional crowd animation mostly uses two-dimensional drawings and text. Here, 3D space modeling technology is applied to the intelligent scene based on the optical flow constraint equation model; the workflow of this method is shown in Figure 2. The computer simulates the interactions between people, objects, or other entities to generate information transmission paths and analyze the entire scene. Because a large amount of data, graphics, and other complex interface information must be processed, a complete and realistic animation effect is difficult to obtain directly; in post-production, a laser synthesis method is therefore usually combined with the reconstruction algorithm of the optical flow constraint equation model to obtain the complete scene animation[8].

Figure 2

3-D spatial modeling process

Crowd animation simulation algorithm based on optical flow constraint equation
Crowd animation information acquisition model

In a complete crowd animation, information acquisition plays a crucial role; it is the most important part of the whole system's operation and data processing. It covers not only the decisions made after analyzing the scene, the character models, and the surrounding environment, but also information about the content the user is interested in; at the same time, other aspects such as scene characteristics, background, and time changes need to be expressed through these contents or shown visually to the audience of the crowd animation. This makes it easier to obtain feedback and understanding from people[9]. The flow chart of animation information collection is shown in Figure 3.

Figure 3

Animation information collection flow chart

The information in the collected crowd data can be expressed by equations (1) and (2):
$$u(x,y;t) = G(x,y;t) \tag{1}$$
$$p(x,t) = \lim_{\Delta x \to 0}\left[\sigma\,\frac{u - (u + \Delta u)}{\Delta x}\right] = -\sigma\,\frac{\partial u(x,t)}{\partial x} \tag{2}$$

The equations for constructing the crowd information are shown in equations (3) and (4):
$$s(k) = \phi \cdot s(k-1) + w(k) \tag{3}$$
$$\phi = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}, \quad w(k) = \begin{pmatrix} R\left(0, \sigma_{\theta(k)}\right) \\ 0 \\ G\left(0, \sigma_{\theta(k)}\right) \\ 0 \\ B\left(0, \sigma_{\theta(k)}\right) \end{pmatrix} \tag{4}$$

The distribution map of the crowd images is given by Equation (5):
$$f = \langle f, d_{\gamma_0} \rangle\, d_{\gamma_0} + R_f \tag{5}$$

According to the above analysis, the frame scan technique is used to classify the feature domain of the crowd image information, and the template matching function f(g_i) for correlation detection of the crowd image information is constructed as shown in Equation (6):
$$f(g_i) = c_1 \tilde\lambda_i \sum_{j=0}^{N_{np}} \frac{\rho_j \vec\upsilon_{ij}}{\left|\vec\upsilon_{ij}\right|^{\sigma_1} + \varepsilon} \Big/ \sum_{j=0}^{N_{np}} \frac{\rho_j}{\left|\vec\upsilon_{ij}\right|^{\sigma_1} + \varepsilon} \tag{6}$$
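As a concrete reading of Equation (6), the following numpy sketch computes the weighted ratio of sums. The function name, the array shapes, and the default parameter values are illustrative assumptions, not fixed by the paper:

```python
import numpy as np

def template_match_score(lam_i, rho, V, sigma1=2.0, c1=1.0, eps=1e-8):
    """Sketch of Eq. (6): a weighted mean of the displacement vectors v_ij,
    with per-vector weights rho_j / (|v_ij|^sigma1 + eps).
    lam_i : scalar feature weight (lambda-tilde_i, assumed given)
    rho   : (N,) weights rho_j
    V     : (N, 2) displacement vectors v_ij for feature g_i"""
    w = np.linalg.norm(V, axis=1) ** sigma1 + eps        # |v_ij|^sigma1 + eps
    num = (rho[:, None] * V / w[:, None]).sum(axis=0)    # weighted vector sum
    den = (rho / w).sum()                                # scalar normaliser
    return c1 * lam_i * num / den
```

With uniform weights and identical unit vectors, the ratio reduces to the common vector scaled by c1 and lam_i, which is a quick sanity check on the transcription.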

In this way, the template matching function of the multidimensional dynamic feature domain of the crowd image information can be obtained, and the crowd image information can be tracked and collected.

Optical flow constraint equation

In practical applications, optical flow can be divided into different types, such as active, passive, and random noise. Crowd scenes place high demands on movement speed, so to keep the scene environment complete, safe, and stable, a model that conforms to the characteristics of the crowd must be established, so that the optical flow computation can be completed in a short time and the resulting data used to evaluate its feasibility, providing a basis for later design and production[10]. The schematic diagram of the optical flow constraint is shown in Figure 4.

Figure 4

Optical flow constraint schematic

Assume there is a mass point m on the collected video image; formula (7) expresses its position:
$$m = (x, y)^T \tag{7}$$

The grayscale value of m at time t is denoted I(x, y, t), where x and y are the spatial coordinates of m. At time t + dt, its grayscale value is I(x + dx, y + dy, t + dt); as dt tends to zero, brightness is conserved, as shown in Equation (8):
$$I(x + dx,\, y + dy,\, t + dt) = I(x,\, y,\, t) \tag{8}$$

A Taylor expansion of the left-hand side gives Equation (9):
$$I(x + dx,\, y + dy,\, t + dt) = I(x, y, t) + \frac{\partial I}{\partial x}dx + \frac{\partial I}{\partial y}dy + \frac{\partial I}{\partial t}dt + \varepsilon \tag{9}$$
where ε denotes the second-order infinitesimal term. As dt tends to 0, Equation (10) follows:
$$\frac{\partial I}{\partial x}dx + \frac{\partial I}{\partial y}dy + \frac{\partial I}{\partial t}dt = 0 \tag{10}$$

Dividing by dt and writing u = dx/dt, v = dy/dt for the velocity components, the partial derivatives of the grayscale values of the acquired crowd images with respect to x, y, t satisfy Equation (11):
$$I_x u + I_y v + I_t = 0 \tag{11}$$

This is the optical flow constraint equation; in vector form it reads as Equation (12):
$$\nabla I \cdot v_m + I_t = 0 \tag{12}$$
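The constraint in Equation (11) can be checked numerically. The sketch below is a minimal numpy example under assumed conditions: it estimates I_x, I_y, I_t by finite differences between two frames of a synthetic pattern translating one pixel per frame, and verifies that I_x u + I_y v + I_t stays small for the true motion (u, v) = (1, 0):

```python
import numpy as np

# Two frames of a smooth synthetic pattern translating right by 1 px/frame.
x, y = np.meshgrid(np.arange(64, dtype=float), np.arange(64, dtype=float))
I1 = np.sin(0.2 * x + 0.1 * y)
I2 = np.sin(0.2 * (x - 1.0) + 0.1 * y)

# Finite-difference estimates of the partial derivatives in Eq. (11).
Ix = np.gradient(I1, axis=1)   # dI/dx, central differences
Iy = np.gradient(I1, axis=0)   # dI/dy
It = I2 - I1                   # dI/dt, frames one time unit apart

# Brightness constancy: Ix*u + Iy*v + It should be near zero for u=1, v=0.
residual = Ix * 1.0 + Iy * 0.0 + It
print(np.abs(residual).mean(), "vs", np.abs(It).mean())
```

The residual is far smaller than the raw temporal difference, which is exactly what Equation (11) predicts for the correct flow; the remaining error comes from the finite-difference approximation of the derivatives.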

Horn-Schunck algorithm

Horn-Schunck is an optical flow based algorithm that plays an important role in describing the interaction between light and objects in an animated scene. Simply put, it represents an animation, video, or other scenario by a function, and handles model construction and parameter setting well. The Heberip OS (system composition diagram shown in Figure 5) is a computer-controlled software system written as a program, also known as a user interface (MISG) system.

Figure 5

Heberip operating system components

The mathematical model of the Horn-Schunck algorithm is given by the following formulas. Assuming the collected crowd image sequence is denoted I(x, y, t), the optical flow field to be obtained is denoted $\vec V(x, y)$. This is equivalent to finding the two optical flow components u(x, y) and v(x, y); the energy function is then defined by Equation (13):
$$E(u, v) = \iint \left[(I_x u + I_y v + I_t)^2 + \alpha^2\left(\|\nabla u\|^2 + \|\nabla v\|^2\right)\right] dx\, dy \tag{13}$$

The minimum of the energy function is obtained by solving for its extremum; the corresponding conditions are given by equations (14) and (15):
$$\frac{\partial L}{\partial u} - \frac{\partial}{\partial x}\frac{\partial L}{\partial u_x} - \frac{\partial}{\partial y}\frac{\partial L}{\partial u_y} = 0 \tag{14}$$
$$\frac{\partial L}{\partial v} - \frac{\partial}{\partial x}\frac{\partial L}{\partial v_x} - \frac{\partial}{\partial y}\frac{\partial L}{\partial v_y} = 0 \tag{15}$$

In the above formulas, L is given by equation (16):
$$L = (I_x u + I_y v + I_t)^2 + \alpha^2\left(\|\nabla u\|^2 + \|\nabla v\|^2\right) \tag{16}$$

Taking the derivatives yields equations (17) and (18):
$$I_x(I_x u + I_y v + I_t) - \alpha^2 \Delta u = 0 \tag{17}$$
$$I_y(I_x u + I_y v + I_t) - \alpha^2 \Delta v = 0 \tag{18}$$

Here Δ denotes the Laplace operator, defined in equation (19):
$$\Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \tag{19}$$

The above identities can be transformed as shown in equations (20) and (21):
$$(I_x^2 + \alpha^2)u + I_x I_y v = \alpha^2 \bar u - I_x I_t \tag{20}$$
$$(I_y^2 + \alpha^2)v + I_x I_y u = \alpha^2 \bar v - I_y I_t \tag{21}$$
where $\bar u$ and $\bar v$ denote local averages of u and v. These linear equations are solved iteratively, as shown in equations (22) and (23):
$$u^{k+1} = \bar u^k - \frac{I_x\left(I_x \bar u^k + I_y \bar v^k + I_t\right)}{\alpha^2 + I_x^2 + I_y^2} \tag{22}$$
$$v^{k+1} = \bar v^k - \frac{I_y\left(I_x \bar u^k + I_y \bar v^k + I_t\right)}{\alpha^2 + I_x^2 + I_y^2} \tag{23}$$
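The update rules (22) and (23) can be sketched directly in numpy. This is a minimal illustrative implementation: the 4-neighbour average used for the local means and the fixed iteration count are assumptions, since the paper does not specify them:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck iteration following Eqs. (22)-(23).
    I1, I2: consecutive grayscale frames as float arrays.
    Returns dense per-pixel flow components (u, v)."""
    Ix = np.gradient(I1, axis=1)           # I_x
    Iy = np.gradient(I1, axis=0)           # I_y
    It = I2 - I1                           # I_t
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)

    def local_mean(f):
        # 4-neighbour average standing in for the local means u-bar, v-bar.
        p = np.pad(f, 1, mode='edge')
        return 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])

    den = alpha**2 + Ix**2 + Iy**2         # shared denominator of (22)-(23)
    for _ in range(n_iter):
        ub, vb = local_mean(u), local_mean(v)
        num = Ix * ub + Iy * vb + It       # data-term residual at (u-bar, v-bar)
        u = ub - Ix * num / den            # Eq. (22)
        v = vb - Iy * num / den            # Eq. (23)
    return u, v
```

The parameter α trades off the data term against smoothness: larger α yields a smoother but less image-driven flow field. On two frames related by a small translation, the data-term residual of the recovered flow drops well below that of the zero-flow initialization.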

Crowd animation simulation results and analysis based on optical flow constraint
Crowd animation scene under optical flow constraint

Under a set light intensity, the illumination experienced by people differs with position. For example, when lights alternate between bright and dim at night, the human body can feel dizzy and tired. This visual environment and physiological phenomenon can therefore be improved by adjusting the light, so as to achieve the best visual effect. This simulation mainly analyzes the crowd animation scene: the experimental object is established through the optical flow model, the optical flow constraint equation is used to calculate its illumination intensity in the actual environment, and the scene is simulated in the laboratory for testing. By checking the error between the experimental results and the theoretical values and analyzing its causes, the accuracy of the optical flow constraint equation in the crowd animation scene is finally determined[11].

Optical flow constraint processing

In the reconstruction process of our optical flow model, the initial data require a series of processing steps. First, the original image is converted to grayscale and the noise is removed by filtering; the smoothed result is passed on to subsequent frames. Then the number of times each pixel block's corresponding point is sampled, together with its position in different cycles, is computed by a stepping method, giving the position coordinates of the pixel block. Finally, the target point is reconstructed and synthesized with the surrounding brightness, shadows, and other factors along this trajectory, yielding an optical flow model that restores the scene more faithfully and effectively reduces the influence of noise.
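The grayscale conversion and denoising steps above can be sketched as follows. The luminance weights and the Gaussian kernel are common illustrative choices, not specified by the paper:

```python
import numpy as np

def preprocess(frame):
    """Convert an (H, W, 3) RGB frame to grayscale and smooth it with a
    separable Gaussian filter (sigma = 1) to suppress noise before
    optical flow estimation."""
    # Standard luminance weights (an assumption; any grayscale map works).
    gray = frame[..., :3].astype(float) @ np.array([0.299, 0.587, 0.114])
    r = 3                                   # kernel radius, ~3 sigma
    t = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-0.5 * t ** 2)
    g /= g.sum()                            # normalised 1-D Gaussian
    p = np.pad(gray, r, mode='edge')        # edge padding preserves the size
    # Separable blur: convolve rows, then columns.
    tmp = np.apply_along_axis(lambda row: np.convolve(row, g, mode='valid'), 1, p)
    out = np.apply_along_axis(lambda col: np.convolve(col, g, mode='valid'), 0, tmp)
    return out
```

Smoothing before differentiation matters because the finite-difference gradients used by the optical flow constraint amplify pixel noise; the blur suppresses that noise at the cost of slightly blurring motion boundaries.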

Evaluation of crowd animation performance based on optical flow constraints

According to the above methods, we can get the following results:

After the model is optimized, the animation production cycle is shortened and the cost is reduced to a certain extent; moreover, the quality of the animation is greatly improved. Because the optical flow constraint equation adds partial inverse noise, it can be concluded that the motion state of the scene matches the actual situation. For the remaining parameters, judging whether the performance indices meet the application requirements directly from experiment, without verification, cannot give the best results; the influence of noise must therefore be taken into account in the experiment, that is, the scene model is reconstructed rather than relying only on the parameters to calculate the results. Optimizing the model improves its dynamics, stability, and visualization, offering users a better way to learn.

Experiment and result analysis
Parameter setting

In the initial state of the designed model, the optical flow data and acceleration parameters are 0, and the change pattern is relatively stable compared with the original simulation. However, because of the formation time and propagation speed of the laser beam, the laser signal cannot accurately predict the frequency of the current scene. The model therefore needs to be adjusted and optimized for the actual situation to improve its fidelity and accuracy, so that it adapts to different needs and meets the user's requirements. In the later stage of this design, the reasonableness of the initial parameter settings was verified through several tests, and the model parameter values were modified according to the experimental results.

According to the experimental data, the scene studied here is based on model reconstruction under the optical flow constraint. The model is initialized before it is built, and a laser scanner is then used to complete the modeling. The distribution of template sizes and the corresponding correlation coefficients used in the layout calculation is shown in Table 1.

Table 1. Template size and correlation coefficient distribution

Template size       4×5     12×10   25×18   20×14   25×50
Setting parameters  0.345   0.675   0.636   0.487   0.543

According to the above parameter settings, the virtual object entity is constructed and the crowd information data are obtained; the resulting crowd animation effect is shown in Figure 6.

Figure 6

Crowd animation effect

First, a complete prototype interface and database tables are built; second, the required parameter values are obtained after the camera is fixed in three-dimensional space and recorded by a laser interferometer; finally, MATLAB software is used to produce graphical information such as the mapping between the data and the animation scene and the motion curves. Through the laser interferometer study, the motion characteristics of the optical flow are analyzed, and MATLAB is used to build a model based on the mapping between the optical flow constraint equation and the experimental data. After these operations are completed, the simulation reconstruction can be performed. Finally, the curves for all parameter values and the related graphical information are obtained from the high-fidelity simulation; the prototype interface is then verified to reach the expected goal through the stepper motor drive module and the implemented algorithm, the problems arising during its application are studied, their causes are analyzed, and feasible solutions are proposed.

Simulation of crowd animation scenes

Scene simulation refers to simulating a complete scene to verify whether the model is correct, rather than relying on software alone. Here we mainly use the optical flow constraint equation and related optical theory. First, basic inputs such as the light environment, the camera trajectory map, and the intended animation effects are established before creation begins; then the camera model and size are selected so that the laser camera matches the desired image; finally, the lighting is set, and the luminance value of each pixel position in the scene is made to correspond to the video area, ensuring that normal use requirements are met. A series of experiments and tests shows that in the crowd animation scene, the camera trajectory and the laser camera parameters match the desired effect.

Conclusion

This paper simulates a number of functions through the establishment and simulation of optical flow constraint equations. First, the change rules of crowd movements are summarized from the analysis of experimental data; factors such as different positions, different materials, and lighting brightness are then added to the software, and, combined with the actual situation, the motion trajectory diagram and the human-computer interaction interface of the corresponding scene are designed. Finally, the animation effect curve is generated by MATLAB, the required model is produced through the stepper motor drive module and the button control module, and the simulation experiment is carried out after the rationality of the algorithm is verified. In the future, much like “Internet +”, such techniques will be applied ever more widely to study, work, and other affairs of daily life; at the same time, the animation industry will face greater challenges and opportunities. With the continuing development of artificial intelligence and the advent of the networked, digital era, people's living standards and quality will also rise. Animation producers will pay more attention to research on new technologies and products, so as to better meet user needs and gain more market share; users, in turn, will pay more attention to the overall quality of animation, enabling it to hold an advantageous position in fierce competition.



References

[1] Étienne Mémin, Tanguy Risset. On the Study of VLSI Derivation for Optical Flow Estimation[J]. International Journal of Pattern Recognition and Artificial Intelligence, 2000, 14(4): 56–65. DOI: 10.1142/S0218001400000295

[2] He Hongfu. High-fidelity reconstruction of crowd animation for virtual entity object behavior[J]. Information Technology, 2020, 44(08): 69–73. DOI: 10.13274/j.cnki.hdzj.2020.08.014

[3] Li Zhaolong, Shen Tongsheng, Lou Shuli. Calculation method of optical flow field under dynamic background[J]. Laser and Infrared, 2017: 123.

[4] Jiang Tong. The Study of Light and Shadow in Animation Scene Design[J]. The Home of Drama, 2019: 122.

[5] Wei Wenhong. Parallel Constrained Differential Evolution Algorithm Based on Hybrid Multi-Constraints Processing Technology[J]. Computer Applications, 2015: 225–230.

[6] Xie Meifen. Research on the use of Horn-Schunck optical flow algorithm in motion target detection and tracking[J]. Journal of Changjiang University, Natural Science Edition: Science and Technology (Upper), 2012: 146–147.

[7] Qin Longlong, Qian Yuan, Hou Xue, Zhang Xiaoyan. Horn-Schunck Optical Flow Motion Vector Optimization Algorithm Based on Wiener Linear Prediction[J]. Computer Engineering and Science, 2015: 138–143.

[8] Sun Aiting, Liu Qingkun. Efficient fleet monitoring information collection model[J]. Computer Engineering and Design, 2010: 82–85.

[9] Yang Fan, Hua Qingyi, Zhou Jie. Nonlinear task model of information acquisition and processing in mobile environment[J]. Journal of Xi'an Shiyou University (Natural Science Edition), 2012: 12+102–106.

[10] Yu Han, Wang Hai, Peng Xin, Zhao Wenyun. Visualization of software evolution information based on 3D animation[J]. Computer Science, 2015: 42–45.

[11] Cao Mengxiao, Zhang Guijuan, Huang Lijun, Liu Hong. Crowd animation generation method based on personalized emotional infection[J]. Computer Science, 2017: 89–94.
