Journal information
Format
Journal
eISSN
2444-8656
First published
01 Jan 2016
Publication frequency
2 times per year
Languages
English
Access type: Open Access

The modelling and implementation of the virtual 3D animation scene based on the geometric centre-of-mass algorithm

Published: 22 Nov 2021
Volume & Issue: AHEAD OF PRINT
Page range: -
Received: 06 Jun 2021
Accepted: 24 Sep 2021
Abstract

In the process of modernisation, the essence of virtual 3D animation scene modelling is to enhance the rendering effect of animation, so designers need to understand and skilfully use virtual reality technology. During scene modelling, the system's perception module obtains perceptual information based on the behavioural state of the character, thereby providing the data basis for the emotion module and the behaviour module. The emotion module is mainly used to adjust the character's emotional changes, while the behaviour module is mainly used to build the motion database. Therefore, after reviewing the current state of virtual 3D animation scene modelling, this article applies the geometric centroid algorithm to gain an in-depth understanding of the modelling technology. The final results prove that the constructed system can not only present animation images clearly and intuitively, but can also, while smoothly connecting multiple motion clips, preserve the original motion details.

Keywords

MSC 2010

Introduction

As the mainstream technology of modern animation production, 3D animation relies on processing techniques that are technically superior to earlier animation processing, which requires that staff be skilled in graphics generation and rendering technology. Generally speaking, 3D animation production is mainly divided into two aspects: character animation and virtual scene production. A well-constructed scene can make the viewer feel immersed. In their study, Wang Ling, Yu Zhe, Gao Wei et al. proposed using a 3D animation production auxiliary system to build 3D animation models from the output and rendering of actors' action and facial expression data on the Unity platform, so as to improve the production quality of 3D animation. Yang Jinqiu, Tong Lijing, Fu Xiaoqin et al. proposed constructing a 3D face model by combining facial features on the basis of a feature-extraction facial modelling animation system, which helps improve the accuracy and real-time performance of the system. Although the above studies verified the effectiveness of 3D animation modelling, the actual rendering effect is not realistic, so it is difficult to give the audience an immersive viewing experience. Therefore, after reviewing existing 3D animation systems built with virtual reality technology, this paper puts forward new requirements for the verisimilitude of the final rendering effect [1].

Method
Virtual 3D animation system

Combined with the system structure diagram shown in Figure 1, the 3D animation system designed with virtual reality technology is mainly divided into four parts. First is the perception module. This part acquires simulated information about virtual characters according to different virtual scenes, and converts data about selected props and nearby characters into perceptual information, so as to serve the emotion module and the behaviour module. Second is the emotion module [2]. This part incorporates the characters' customised emotions into the plot description and sets up a personalised emotion transformation strategy, so as to improve the rendering effect of 3D animation characters as their emotions change over time. Third is the behaviour module. This part governs the behaviour selection ability of animation characters; their expressive and reactive abilities during motion are mainly determined by the design quality of the behaviour module. Fourth is the 3D animation generation module. This part is the core of the 3D animation system, covering motion control, scene modelling, character modelling and related content. The overall flow chart of 3D animation modelling is shown in Figure 2 [3].

Fig. 1

System frame diagram

Fig. 2

3D animation production flow chart

Definition 1

In the 3D animation system, how a character moves is described by a motion (T); adjusting the character's original posture (Ti) yields the new posture (T). This process belongs to visual interactive action design. Generally speaking, the size of the user window is F × G, and the character model is moved with the mouse; the change in the F direction is ΔF and the change in the G direction is ΔG. Combined with Euler angle analysis, the Euler angle

⟨α, β, γ⟩ is regarded as the rotation angle in the D, F and G directions; the actual relation is:

$$\sin\alpha = a\,\Delta f/F + (1-a)\,\Delta g/G$$
$$\sin\beta = b\,\Delta f/F + (1-b)\,\Delta g/G$$
$$\sin\gamma = c\,\Delta f/F + (1-c)\,\Delta g/G$$

Theorem 1

In the above formula, a, b and c represent the influence factors, which describe the influence of ΔF and ΔG in the three directions above on the Euler angle ⟨α, β, γ⟩. Combined with computational analysis, the character model's latest posture can finally be obtained.
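As a minimal sketch of this update step, assuming the two normalised mouse deltas combine linearly under the influence factors (the function name `euler_from_mouse` and the default factor values are ours):

```python
import math

def euler_from_mouse(df, dg, F, G, a=0.7, b=0.5, c=0.3):
    """Map mouse deltas (df, dg) in an F x G window to Euler angles
    (alpha, beta, gamma) using per-axis influence factors a, b, c."""
    def angle(k):
        s = k * df / F + (1 - k) * dg / G  # sine of the rotation angle
        s = max(-1.0, min(1.0, s))          # clamp into asin's domain
        return math.asin(s)
    return angle(a), angle(b), angle(c)

# No mouse movement leaves the posture unchanged
alpha, beta, gamma = euler_from_mouse(0, 0, 800, 600)  # -> (0.0, 0.0, 0.0)
```

The clamp keeps large drags from leaving the domain of the arcsine, which the formula itself does not guarantee.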

After completing the construction of 3D animation characters and scenes, we need to analyse the principle of texture mapping. In essence, texture mapping technology can simulate the fine but irregular colour texture on the surface of a human body or object; applied to 3D animation modelling, it simplifies the actual modelling process. Virtual 3D animation modelling based on texture mapping technology mainly starts from two points: on the one hand, after judging the texture attributes, it must be clear which surface parameters of the human body or object need to be defined in texture form; on the other hand, we should not only build the mapping relationship between texture space and human body or object space, but also master the mapping relationship between human body or object space and screen space [4].

Proposition 2

Combined with the above image analysis, it can be seen that M represents the transformation from human body or object space to texture space, and T represents the transformation from screen space to human body or object space, both of which conform to the following formulas:

$$T(Q) = P$$

$$M(P) = (u, v)$$

Lemma 3

The specific calculation process is as follows:

It is assumed that the vertices of the triangle are V1, V2 and V3; the cross product [V1V2] × [V2V3] is perpendicular to the triangle. After normalisation, the normal vector V of the triangle can be determined, and the average of all such vectors then determines the overall direction of the model. The specific formula is as follows:

$$Q = \sin a \sum_{i=1} v_i - \frac{F(1-a)\,\Delta g}{G}$$
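A short numerical sketch of the normal computation described above (the function names are ours):

```python
import numpy as np

def triangle_normal(v1, v2, v3):
    """Unit normal of a triangle: cross product of the two edge vectors
    [V1V2] x [V2V3], normalised to unit length."""
    n = np.cross(np.subtract(v2, v1), np.subtract(v3, v2))
    return n / np.linalg.norm(n)

def overall_direction(normals):
    """Average of all face normals, renormalised, as the model's direction."""
    m = np.mean(normals, axis=0)
    return m / np.linalg.norm(m)

# A triangle in the xy-plane has the z-axis as its normal
n = triangle_normal([0, 0, 0], [1, 0, 0], [0, 1, 0])  # -> [0, 0, 1]
```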

Corollary 4

In order to analyse the mapping relationship between the surface patch and the corresponding texture coordinates, the perspective projection transformation can be used to calculate the texture coordinates of the mesh points, as shown below:

$$k\,[u\ v\ 1]^T = kX' = HX,\qquad H = \begin{bmatrix} h_1 & 0 & 0 & h_4 \\ 0 & h_2 & 0 & h_5 \\ 0 & 0 & h_3 & 1 \end{bmatrix}$$

In the above formula, h1, h2, h3, h4 and h5 represent the unknown parameters; the texture coordinate is (u, v); the corresponding homogeneous texture coordinate is X′; the homogeneous space coordinate of the grid point is X; H represents the 3 × 4 perspective projection matrix; and k represents the constant coefficient. From the above formula, each corresponding pair of mesh vertex and texture point yields two independent linear equations, so three groups of feature points must be selected in order to determine the matrix H.

Conjecture 5. After studying the whole texture mapping process, it can be seen that the image size can be set to m × n, which simplifies the above formula. The specific process is as follows:

$$[u\ v\ 1]^T = \begin{bmatrix} 1/m & 0 & 0 \\ 0 & 1/n & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & s \\ 0 & 1 & 0 & t \\ 0 & 0 & 0 & 1 \end{bmatrix} [x\ y\ z\ 1]^T = \begin{bmatrix} 1/m & 0 & 0 & s/m \\ 0 & 1/n & 0 & t/n \\ 0 & 0 & 0 & 1 \end{bmatrix} [x\ y\ z\ 1]^T$$

In the above formula, s and t represent the unknown parameters, and (x, y, z) represents the vertex coordinates of the surface. Only one group of feature points is needed to calculate the unknown parameters. During texture acquisition, every texture is taken as its minimum bounding box, and there are multiple tangent points on the edge of the image, so the corresponding feature points can be obtained quickly.

Assume that the texture coordinates (u′, v′) and the surface vertex coordinates (x′, y′, z′) belong to the same group of feature points; substituting them into the above formula gives:

$$[u\ v\ 1]^T = \begin{bmatrix} 1/m & 0 & 0 & (u'-x')/m \\ 0 & 1/n & 0 & (v'-y')/n \\ 0 & 0 & 0 & 1 \end{bmatrix} [x\ y\ z\ 1]^T$$

Geometric centroid algorithm
Example 6

The centroid algorithm is used to improve the accuracy of data processing. Its common application forms are as follows:

First, the ordinary centroid algorithm:

$$(x_c, y_c) = \frac{\sum_{ij} x_{ij} I_{ij}}{\sum_{ij} I_{ij}}$$

Note 7. In the above formula, Iij represents the light intensity at each pixel of the two-dimensional image. This algorithm is mainly used for image processing without background noise, or under uniform background noise or a sufficiently high signal-to-noise ratio.
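A minimal sketch of the ordinary centroid computation (the function name and the NumPy layout, rows indexed by j and columns by i, are ours):

```python
import numpy as np

def intensity_centroid(I):
    """Ordinary centroid: intensity-weighted average of pixel coordinates."""
    I = np.asarray(I, dtype=float)
    ys, xs = np.mgrid[0:I.shape[0], 0:I.shape[1]]  # pixel coordinate grids
    total = I.sum()
    return (xs * I).sum() / total, (ys * I).sum() / total

# A single bright pixel puts the centroid exactly on it
img = np.zeros((5, 5))
img[2, 3] = 10.0
xc, yc = intensity_centroid(img)  # -> (3.0, 2.0)
```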

Second, the strongly weighted centroid algorithm:

$$x_c = \frac{\sum_{j=y_0-W_{0,y}/2}^{y_0+W_{0,y}/2} \sum_{i=x_0-W_{0,x}/2}^{x_0+W_{0,x}/2} x_i I_{ij}\, w}{\sum_{j=y_0-W_{0,y}/2}^{y_0+W_{0,y}/2} \sum_{i=x_0-W_{0,x}/2}^{x_0+W_{0,x}/2} I_{ij}\, w}$$

In this kind of algorithm, three forms of the weighting function w are mainly involved; the corresponding y-coordinate is computed analogously:

$$y_c = \frac{\sum_{i=x_0-W_{0,x}/2}^{x_0+W_{0,x}/2} \sum_{j=y_0-W_{0,y}/2}^{y_0+W_{0,y}/2} y_j I_{ij}\, w}{\sum_{i=x_0-W_{0,x}/2}^{x_0+W_{0,x}/2} \sum_{j=y_0-W_{0,y}/2}^{y_0+W_{0,y}/2} I_{ij}\, w}$$

Open Problem 8. Here, A and P represent intensity values. Generally speaking, the above algorithm is mainly used to analyse and detect the centroid of a light spot.

Third, the distance centroid algorithm:

$$x_c^* = \frac{\sum_{ij} x_i I_{ij} W_{ij}}{\sum_{ij} I_{ij} W_{ij}},\qquad y_c^* = \frac{\sum_{ij} y_j I_{ij} W_{ij}}{\sum_{ij} I_{ij} W_{ij}},\qquad W_{ij} = \frac{1}{s} = \frac{1}{\sqrt{(x_i - x_c)^2 + (y_j - y_c)^2}}$$

In the above formula, (xi, yj) represents the coordinates of the detected pixel, (xc, yc) represents the central coordinates of the spot, ($x_c^*$, $y_c^*$) refers to the calculated centroid coordinates of the spot, and Iij refers to the value of the current pixel.
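Since the weight W_ij itself depends on the spot centre, the distance centroid is naturally computed iteratively. A small fixed-point sketch (the function name, iteration count and `eps` guard against division by zero are ours):

```python
import numpy as np

def distance_centroid(I, x0, y0, iters=5, eps=1e-9):
    """Distance-weighted centroid with W_ij = 1 / sqrt((x_i-x_c)^2 + (y_j-y_c)^2),
    refined by fixed-point iteration from an initial estimate (x0, y0)."""
    I = np.asarray(I, dtype=float)
    ys, xs = np.mgrid[0:I.shape[0], 0:I.shape[1]]
    xc, yc = float(x0), float(y0)
    for _ in range(iters):
        W = 1.0 / (np.sqrt((xs - xc) ** 2 + (ys - yc) ** 2) + eps)
        denom = (I * W).sum()
        xc = (xs * I * W).sum() / denom
        yc = (ys * I * W).sum() / denom
    return xc, yc

spot = np.zeros((5, 5))
spot[2, 2] = 5.0
xc, yc = distance_centroid(spot, 0, 0)  # converges to (2.0, 2.0)
```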

The image centroid is the centre of gravity of the image grey level. Assume the image has the two directions i and j; m and n represent the number of pixels in the i and j directions, respectively, and g(i, j) represents the grey value at pixel (i, j). The coordinate expression of the corresponding image centroid position is then:

$$x = \frac{\sum_{j=1}^{n} \sum_{i=1}^{m} g(i,j)\, i}{\sum_{j=1}^{n} \sum_{i=1}^{m} g(i,j)},\qquad y = \frac{\sum_{j=1}^{n} \sum_{i=1}^{m} g(i,j)\, j}{\sum_{j=1}^{n} \sum_{i=1}^{m} g(i,j)}$$

Fourth is the traditional centroid algorithm. Assuming that the pixel in row i and column j has coordinates (xi, yj) and grey value G(xi, yj), the centroid of the star can be expressed as:

$$\hat x = \frac{\sum_{i=i_1}^{i_2} \sum_{j=j_1}^{j_2} x_i\, G(x_i, y_j)}{\sum_{i=i_1}^{i_2} \sum_{j=j_1}^{j_2} G(x_i, y_j)}$$

In previous research on the positioning accuracy of the algorithm, the grey value of a star pixel is decomposed into signal and noise, $G_i = S_i + N_i$, where $S_i$ represents the grey value of the signal and $N_i$ represents the grey value of the noise. Substituting this decomposition into the above formula gives:

$$\hat x = \frac{\sum_{i=i_1}^{i_2} x_i (S_i + N_i)}{\sum_{i=i_1}^{i_2} (S_i + N_i)} = \frac{\sum_{i=i_1}^{i_2} x_i S_i}{\sum_{i=i_1}^{i_2} S_i} \left(1 - \frac{\sum_{i=i_1}^{i_2} N_i}{\sum_{i=i_1}^{i_2} (S_i + N_i)}\right) + \frac{\sum_{i=i_1}^{i_2} x_i N_i}{\sum_{i=i_1}^{i_2} (S_i + N_i)} = \bar x (1 + \eta_1) + \eta_2$$

And the above formula meets the following conditions:

$$\bar x = \frac{\sum_{i=i_1}^{i_2} x_i S_i}{\sum_{i=i_1}^{i_2} S_i},\qquad \eta_1 = -\frac{\sum_{i=i_1}^{i_2} N_i}{\sum_{i=i_1}^{i_2} (S_i + N_i)},\qquad \eta_2 = \frac{\sum_{i=i_1}^{i_2} x_i N_i}{\sum_{i=i_1}^{i_2} (S_i + N_i)}$$
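The decomposition of the noisy centroid into the noise-free centroid plus error terms is an algebraic identity, which can be checked numerically (the arrays below are arbitrary test data, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(10, dtype=float)       # pixel coordinates x_i
S = rng.uniform(1.0, 5.0, size=10)   # signal grey values S_i
N = rng.uniform(-0.2, 0.2, size=10)  # noise grey values N_i

x_hat = (x * (S + N)).sum() / (S + N).sum()  # noisy centroid
x_bar = (x * S).sum() / S.sum()              # noise-free centroid
eta1 = -N.sum() / (S + N).sum()
eta2 = (x * N).sum() / (S + N).sum()

# x_hat equals x_bar * (1 + eta1) + eta2 up to floating-point rounding
err = abs(x_hat - (x_bar * (1 + eta1) + eta2))
```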

When calculating the median filter, the specific formula is as follows:

$$g(m, n) = \mathrm{Median}\{ f(m - k, n - l),\ (k, l) \in W \}$$

The corresponding weighted median filter is:

$$y_i = \mathrm{Weighted\_Med}(x_{i-1}, x_i, x_{i+1})$$

From a physical perspective, for a three-dimensional manifold Ω in Euclidean space, the coordinates of its centroid c can be calculated as the density-weighted average of all internal points. The specific formula is as follows:

$$c = \frac{\int_\Omega \rho(x)\, x\, d\sigma}{\int_\Omega \rho(x)\, d\sigma}$$

In the above formula, ρ represents the density distribution function over the three-dimensional manifold, so the centroid coincides with the point of minimum moment of inertia. The specific formula is as follows:

$$c = \arg\min_y \int_\Omega \rho(x)\, \|x - y\|^2\, d\sigma$$

Therefore, in the algorithm analysis, replacing the Euclidean distance with the internal distance of the three-dimensional manifold gives the corresponding definition:

$$c = \arg\min_y \int_\Omega \rho(x)\, d_g^2(x, y)\, d\sigma$$

In the above formula, c represents the centroid, ρ(x) represents the density at point x, and dg(x, y) represents the internal distance between points x and y. The corresponding energy function is:

$$E(y) = \int_\Omega \rho(x)\, d_g^2(x, y)\, d\sigma$$

Its gradient can be expressed as:

$$\frac{\partial E}{\partial y} = 2 \int_\Omega \rho(x)\, d_g(x, y)\, \frac{\partial d_g(x, y)}{\partial y}\, d\sigma$$

When calculating and analysing the geometric internal distance, the heat conduction equation can be used. Let μ denote the solution of the heat conduction equation, where t refers to the elapsed time and x and y represent two points in space; the corresponding equation is:

$$\frac{\partial \mu}{\partial t} = \frac{1}{2} \Delta \mu$$

Combined with the theoretical analysis proposed by Varadhan, the distance d between two points in Euclidean space and the heat transferred over time t satisfy:

$$\mu(t, x, y) = (2\pi t)^{-k/2}\, e^{-\frac{d^2(x, y)}{2t}},\qquad t \to 0.$$

In the above formula, k represents the dimension of the region.
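In the Euclidean case the relation can be checked directly in one dimension (k = 1): the heat kernel determines the distance exactly via d = √(−2t·log(μ·√(2πt))). A quick numeric sketch (the function names are ours):

```python
import math

def heat_kernel_1d(t, d):
    """Heat kernel of du/dt = (1/2) * Laplacian(u) in one dimension (k = 1)."""
    return (2 * math.pi * t) ** -0.5 * math.exp(-d * d / (2 * t))

def distance_from_heat(t, u):
    """Invert the kernel: d = sqrt(-2 t log(u * sqrt(2 pi t)))."""
    return math.sqrt(-2 * t * math.log(u * math.sqrt(2 * math.pi * t)))

d = 0.8
u = heat_kernel_1d(1e-3, d)
recovered = distance_from_heat(1e-3, u)  # -> 0.8 (exact kernel, so exact inverse)
```

On a curved manifold the kernel is only asymptotically of this form, which is why the relation is stated for t → 0.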

Next, we need to calculate and analyse the discretisation of the 3D manifold and of the differential operators. Taking the latter as an example, if f is a scalar function defined inside the geometry, then in the discrete case f can be expressed as f = (f1, f2, …, fn), where fi represents the value of the function at the i-th vertex. The gradient of f can be computed from the function values at the tetrahedral vertices as shown below:

$$(\nabla f)_i = \frac{1}{3 V_i} \sum_j S_j f_j n_j$$

In the above formula, Vi represents the volume of tetrahedron i, fj represents the function value at vertex j, and Sj and nj represent the area of the triangle of the tetrahedron corresponding to vertex j and its unit normal vector, respectively. The divergence of the vector field is discretised similarly. Suppose g = (g1, g2, …, gm) represents the piecewise linear vector field on the tetrahedral grid, where m represents the number of tetrahedra in the grid; the divergence at vertex i is:

$$(\nabla \cdot g)_i = \sum_j \frac{S_j}{3 V_j}\, n_j \cdot g_j$$
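As a hedged sketch of the discrete gradient, one can use the standard linear-interpolation gradient on a single tetrahedron (equivalent for piecewise-linear functions, though written differently from the area-weighted vertex form above; the function name is ours):

```python
import numpy as np

def tet_gradient(p, f):
    """Gradient of the linear interpolant of vertex values f over one
    tetrahedron with vertices p (4x3): solve (p_i - p_0) . grad = f_i - f_0."""
    p = np.asarray(p, dtype=float)
    f = np.asarray(f, dtype=float)
    E = p[1:] - p[0]                 # 3x3 matrix of edge vectors from vertex 0
    return np.linalg.solve(E, f[1:] - f[0])

# For f(x, y, z) = x the gradient is (1, 0, 0) on any non-degenerate tet
p = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
grad = tet_gradient(p, [0.0, 1.0, 0.0, 0.0])  # -> [1, 0, 0]
```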

Result analysis

3D animation creation contains two aspects: character modelling and scene modelling. This paper mainly starts from scene modelling with the geometric centroid algorithm and analyses the verisimilitude of the final rendered picture and the coherence of the action. Taking basketball as an example, the operations of visual recognition, feature extraction and related steps are used to transform an athlete's movement into a virtual 3D animation scene. Texture mapping technology is also used in the analysis. Comparing the final animation images shows that the actual effect is not only clearer and more lifelike, but also guarantees the continuity of the moving picture. To further verify the modelling effect of the virtual 3D animation scene outlined in this paper, an experimental analysis is conducted on athlete videos. The video comprises 60 frames, and every 10 frames is set as one segment. Finally, the smooth connection of the selected motion segments in the virtual 3D animation video is shown in Figure 3 [5].

Fig. 3

Comparison of smooth kinematic connections between left and right elbow joints and calcaneal joints

From the comparison of the above pictures, it can be seen that the system outlined in this paper is smoother in the junction area of two segments, which not only guarantees the effective connection of the two movements, but also fully shows the actual motion details and inherits the original motion data. This proves that virtual 3D animation scene modelling based on the geometric centroid algorithm is more realistic and effective.
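The smooth connection of adjacent clips can be sketched as a crossfade over the overlapping frames (a generic illustration, not the paper's exact blending scheme; the function name and the 4-frame overlap are ours):

```python
import numpy as np

def blend_segments(seg_a, seg_b, overlap):
    """Crossfade the last `overlap` frames of seg_a into the first `overlap`
    frames of seg_b; each frame is a pose vector (e.g. joint angles)."""
    seg_a = np.asarray(seg_a, dtype=float)
    seg_b = np.asarray(seg_b, dtype=float)
    w = np.linspace(0.0, 1.0, overlap)[:, None]  # fade-in weights for seg_b
    blended = (1 - w) * seg_a[-overlap:] + w * seg_b[:overlap]
    return np.concatenate([seg_a[:-overlap], blended, seg_b[overlap:]])

# Two 10-frame segments of 2 joint angles, joined with a 4-frame crossfade
a = np.zeros((10, 2))   # poses held at 0
b = np.ones((10, 2))    # poses held at 1
out = blend_segments(a, b, 4)  # 16 frames, ramping smoothly from 0 to 1
```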

In addition, among the other types of virtual 3D animation scene modelling analysed in this paper, operation methods based on visual recognition and feature extraction are the most common. Comparing the connection effect of multiple motion fragments between those two systems and the system outlined in this paper yields the results shown in Figure 4 [7].

Fig. 4

Comparison and analysis diagram of three systems based on the connection of multiple motion segments

From the above pictures, it can be seen that the system outlined in this paper is relatively smooth in the connection area of motion fragments, while the other two systems show many problems there, making it difficult for them to present all motion fragments comprehensively. Thus, for virtual 3D animation scenes, the reasonable use of virtual reality technology and the geometric centroid algorithm can not only improve the speed of system modelling and guarantee the image rendering effect, but can also expand the application range of virtual reality technology and promote the fusion of art and technology in our country's animation industry.

Conclusion

To sum up, against the background of the new era and an increasingly innovative market environment, the 3D animation industry must attach importance to the application and exploration of virtual reality technology combined with these algorithms in order to achieve sustainable development, focusing comprehensive study of the 3D animation system on the modelling work so as to speed up the creation of works with realistic characters and objects. At the same time, according to the development direction of the 3D animation industry, excellent technical and managerial talents should be vigorously cultivated, which can not only improve the comprehensive level of the 3D animation industry, but also accelerate the integration of art, modern technology, traditional industry and the animation industry. In addition, existing virtual reality technology and 3D animation systems should be continuously optimised and innovated according to consumers' understanding of and demand for 3D animation scenes. Only in this way can the steady development of the 3D animation industry be guaranteed [8].


[1] Ruchuan Wang, Chenyun Xin, Dengyin Zhang, et al. Research on 3D Geometry Modeling Algorithm Based on VRML [J]. Communication Journal, 2003(07): 74–79.

[2] Yanbo Chen. Application Research of 3D Virtual Campus Model Based on 3D Animation Foundation and Modeling Course [J]. Digital Technology and Application, 2020, 038(003): 86–87.

[3] Xiu Yang. Based on 3D Indoor Scene Synthesis Algorithm in Virtual Reality Environment [J]. Progress, 2018, 000(003): 104–105.

[4] A. Adán, P. Merchán, S. Salamanca. 3D scene retrieval and recognition with Depth Gradient Images [J]. Pattern Recognition Letters, 2011, 32(9): 1337–1353.

[5] Chunyan Shi. Research on Virtual Reality Method of Interior Design Based on 3D Vision [J]. Modern Electronic Technology, 2018, 41(05): 78–82.

[6] Changshui Zhu, Jianlong Shao. Realization of 3D Dynamic Scene Technology Based on Virtual Reality Modeling Language [J]. Computer Knowledge and Technology, (2): 404–405.

[7] Ji Fan, Nan Zhang. Research on Dynamic Loading Algorithm in Virtual 3D Scene [J]. Electronic Manufacturing, 2020(24): 49–52.

[8] M. Chmilar, B. Wyvill, C. Herr. A software architecture for integrating modeling with kinematic and dynamic animation [J]. The Visual Computer, 1991, 7(2–3): 122–137.
