
Animation VR scene mosaic modeling based on generalized Laplacian equation

Published online: 15 Jul 2022
Volume & Issue: AHEAD OF PRINT
Pages: -
Received: 18 Apr 2022
Accepted: 15 Jun 2022
Introduction

Three-dimensional animation is now a mainstream animation product, and its processing technology demands more than traditional cartoon techniques: practitioners must be proficient in scene generation and processing. 3D animation divides mainly into character animation and virtual scene production, which together give viewers an immersive feeling. An animation scene is the setting and environment in which animated characters act and perform, a concept distinct from "background". A scene refers not only to the space behind the characters' activities but also to the expression of space over time; different story fragments are expressed in specific scenes [1]. Animation scene design is the modeling of everything in the film that changes with time, apart from the character modeling itself. The character is the main body of the story, so the scene must be designed closely around the character's performance. A scene can establish the character's location and furnishings, the social, historical and natural environment, and even the surrounding crowd [2]. Based on the script, scene design focuses on shaping time and space so as to present the plot, carry the dramatic conflict and depict character. When designing scenes, therefore, one must grasp the overall modeling style of the animated film and follow the aesthetic requirements of visual art [3]. The 3D animation production assistant system based on visual recognition designed by Ding H et al. outputs and renders actor motion and expression data through the Unity platform, constructs a 3D animation model, and improves the efficiency and quality of 3D animation production [4]. The facial modeling and animation system based on feature extraction designed by Sahu A et al. builds a real-time 3D facial animation system by extracting facial feature points and processing a 3D face model, improving the system's real-time performance and accuracy. Both systems can build 3D animation, but the realism of the result is relatively poor and cannot create a good sense of immersion for viewers [5]. Ma L et al. regard immersive 3D animation as a rising form of artistic expression in the animation field: it breaks through the limits of two-dimensional space and relies on computer vision and scene technology to build a virtual immersive three-dimensional space through model construction, material drawing, lighting simulation and motion design. Its production differs from traditional 3D animation creation, and the basic "three-dimensional" characteristic runs through the entire creative process [6]. Zheng S et al. note that the purpose of stereoscopic rendering is to present three-dimensional spatial information to the viewer: by simulating the human visual system and providing a scene pair with parallax, the human eye perceives depth, and by shooting and rendering the animation process with a virtual stereo camera, 3D animation pictures close to the real scene are generated.
Three-dimensional technology is widely used in virtual reality, games, film and television production, advertising and other fields [7]. The concept of "immersive" is an extension of virtual reality technology. Watching conventional 3D animation is like experiencing a deeper, wider world through a window; in virtual reality this boundary is broken and the experience enters the next stage. An immersive perspective has no boundary: the viewer no longer perceives a screen plane or a window into another world, and so obtains a view free of boundary obstacles [8].

Method
Modeling principle of generalized Laplacian equation

The generalized Laplacian equation method is a signal analysis method based on time scale; it offers multi-resolution analysis and can express local characteristics. It decomposes the source scene into details of different scales and directions plus a lowest approximation. The lowest approximation carries the average information of the scene, that is, most of the energy of the whole scene; the details at different scales and directions contain the high-frequency or edge information of the scene [9]. In the continuous wavelet transform, let f(t) and ψ(t) be square integrable, and let Ψ(ω) be the Fourier transform of ψ(t), satisfying the admissibility condition in equation 1:

$$\int_{-\infty}^{+\infty} \frac{\Psi(\omega)}{\omega}\, d\omega < \infty \quad (1)$$

$$W_f(x, y) = \frac{1}{\sqrt{x}} \int_{-\infty}^{+\infty} f(t)\, \psi\!\left(\frac{t - y}{x}\right) dt \quad (2)$$
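For concreteness, equation 2 can be evaluated numerically. The minimal Python/NumPy sketch below assumes a Mexican-hat wavelet for ψ (an assumption, since the paper does not fix a generating function) and computes W_f(x, y) at a single scale-shift point with a simple Riemann sum on a uniform grid.

```python
import numpy as np

def mexican_hat(t):
    # Mexican-hat (Ricker) wavelet: one admissible choice of psi
    return (1.0 - t ** 2) * np.exp(-t ** 2 / 2.0)

def cwt_point(f_samples, t, x, y):
    """Evaluate W_f(x, y) of equation (2) on a uniform grid t by a
    Riemann sum; x is the scale factor, y the translation parameter."""
    psi = mexican_hat((t - y) / x)
    return np.sum(f_samples * psi) * (t[1] - t[0]) / np.sqrt(x)

# usage: one point of the transform of a chirp signal
t = np.linspace(-10.0, 10.0, 2001)
f = np.sin(2.0 * np.pi * t ** 2 / 8.0)
print(cwt_point(f, t, x=1.5, y=0.0))
```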

Equation 2 is the continuous wavelet transform of f(t), where ψ(t) is the wavelet generating function, x is the scaling factor and y is the translation parameter. Let the two-dimensional scene signal be F. For a given scale function and wavelet coefficients, the wavelet decomposition of the scene is shown in equation 3:

$$\begin{cases} F_k(n, m) = \sum_n \sum_m F_{k-1}(n, m)\, h(2n - i)\, h(2m - j) \\ d_k^H(n, m) = \sum_n \sum_m F_{k-1}(n, m)\, h(2n - i)\, g(2m - j) \\ d_k^V(n, m) = \sum_n \sum_m F_{k-1}(n, m)\, g(2n - i)\, h(2m - j) \\ d_k^D(n, m) = \sum_n \sum_m F_{k-1}(n, m)\, g(2n - i)\, g(2m - j) \end{cases} \quad (3)$$

where F_k, d_k^H, d_k^V and d_k^D are the low-frequency component and the horizontal, vertical and diagonal high-frequency components of the source scene at this resolution, respectively, and h and g are the low-pass and high-pass analysis filters.
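Below is a minimal sketch of one level of the separable decomposition in equation 3, assuming Haar analysis filters for h and g (the paper does not specify its filter bank); it produces the approximation F_k and the three detail bands from F_{k-1}.

```python
import numpy as np

def analysis_1d(a, filt):
    # convolve every row with the filter, then keep every second sample
    full = np.apply_along_axis(lambda r: np.convolve(r, filt, mode='full'), -1, a)
    return full[..., 1::2]

def dwt2_level(F_prev, h, g):
    """One level of the separable decomposition in equation (3):
    returns (F_k, d_H, d_V, d_D) from F_{k-1}."""
    lo = analysis_1d(F_prev, h)      # low-pass along rows
    hi = analysis_1d(F_prev, g)      # high-pass along rows
    F_k = analysis_1d(lo.T, h).T     # low/low   -> approximation
    d_H = analysis_1d(lo.T, g).T     # low/high  -> horizontal detail
    d_V = analysis_1d(hi.T, h).T     # high/low  -> vertical detail
    d_D = analysis_1d(hi.T, g).T     # high/high -> diagonal detail
    return F_k, d_H, d_V, d_D

h = np.array([1.0, 1.0]) / np.sqrt(2.0)   # Haar scaling (low-pass) filter
g = np.array([1.0, -1.0]) / np.sqrt(2.0)  # Haar wavelet (high-pass) filter

scene = np.random.rand(8, 8)               # stand-in for the source scene
F1, dH, dV, dD = dwt2_level(scene, h, g)   # each band is 4 x 4
```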

Animation scene splicing

The 3D animation scene splicing system consists mainly of a perception module, an emotion module, a behavior module and a 3D animation generation module [10]. Figure 1 shows the overall framework of the 3D animation system.

Figure 1

Overall framework of 3D animation system

The perception module extracts the simulated information of virtual characters from the different virtual scenes, converting the tools in the selected scene and the materials of nearby characters into perceptual information that serves the emotion and behavior modules. The emotion module includes the customized character emotions in the plot description; it sets a personalized emotion transformation strategy and updates emotions in time, increasing the realism of the 3D animated characters. The behavior module governs a 3D character's behavior selection: the quality of the character's acting and reaction ability is determined by the design of this module [11]. Character behavior includes the customized actions in the plot description; combined with the perceptual information it forms a motion library, and a series of action sequences is produced under online motion control. The 3D animation generation module is the core of the system and includes character modeling, scene modeling and motion control. Character modeling presents not only the appearance of the virtual character but also the physical state of the character's body, such as height and weight.

Scene modeling manages and operates a scene database that can reproduce all the details of the real environment in the virtual scene and thus build a realistic 3D scene. Motion control makes the animated characters move and is the high-level control method for virtual character motion [12]. Based on the character motion library, the cumbersome motion simulation of virtual characters can be completed when making 3D animation simply by describing behavior characteristics, with no complex adjustment of local details. Based on the constructed 3D animation character prototype and action library, users only need to select the prototype they require and change parameters such as the prototype's height and body proportions to obtain the target 3D animation character [13], as shown in Figure 2; a minimal sketch of this parameter-driven instantiation follows the figure.

Figure 2

Flowchart of the animation model
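The parameter-driven instantiation described above can be pictured with a small sketch. Everything here is illustrative: the prototype fields, joint names and scaling convention are assumptions, not the paper's data structures.

```python
from dataclasses import dataclass

@dataclass
class CharacterPrototype:
    # joint name -> (x, y, z) offset from the parent joint; the names,
    # fields and units here are illustrative, not the paper's format
    joint_offsets: dict

    def instantiate(self, height_scale=1.0, limb_scale=1.0):
        """Derive a target character from the prototype by rescaling the
        skeleton: trunk joints follow the overall height factor, limb
        joints follow their own proportion factor (an assumed convention)."""
        target = {}
        for name, off in self.joint_offsets.items():
            s = limb_scale if name.startswith(('arm', 'leg')) else height_scale
            target[name] = tuple(s * c for c in off)
        return CharacterPrototype(target)

proto = CharacterPrototype({'spine': (0.0, 0.5, 0.0),
                            'arm_l': (0.2, 0.0, 0.0),
                            'leg_l': (0.1, -0.9, 0.0)})
tall = proto.instantiate(height_scale=1.10, limb_scale=1.05)
```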

The target 3D character is consistent with the prototype's original actions in skeleton, skin, controller and action library. The 3D animation character template composed of virtual bones, skin and controller is the character prototype, and the series of basic actions designed for the prototype forms the character action library [14]. Through the graphical interface, users can freely change the proportions and shape of 3D characters, design target characters and build new ones. Motion capture technology is used to obtain a large number of action examples of character motion, which are stored in the corresponding motion library. The captured examples are displayed in three-dimensional form by virtual synthesis software, so that the 3D animation system is visual and character-model actions are easy to change. The rationality of a new action is verified with the Newton-Euler motion model. The system sets the character action as motion(x) and changes the original pose position(x_i) to obtain a new character pose; these operations realize visual interactive action design. Let the user window be of size H × K and let the character model be moved with the mouse, with change Δh in direction h and Δk in direction k. By Euler's theorem, the Euler angles <α, β, γ> represent the rotations about the directions l, h, k, and the relationship can be derived as in equation 4:

$$\begin{matrix} \sin\alpha = z\,\Delta h / H + (1 - z)\,\Delta k / K \\ \sin\beta = x\,\Delta h / H + (1 - x)\,\Delta k / K \\ \sin\gamma = y\,\Delta h / H + (1 - y)\,\Delta k / K \end{matrix} \quad (4)$$

where z, x, y are influence factors representing the degree to which Δh and Δk affect the Euler angles <α, β, γ> about the three directions l, h, k. The new pose of the character model is obtained by calculation, realizing the simulation of the 3D character model. Animation scene modeling technology uses computer graphics to abstract the real scene and builds the 3D geometric model of the virtual landscape in polygonal form, including terrain and architecture; the lighting and material models in the virtual scene must also be built.
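Under the reconstructed reading of equation 4 above (taking the window dimensions H and K as the normalizers of the drag deltas is itself an assumption recovered from the surrounding text), the mouse-drag-to-Euler-angle mapping might be sketched as:

```python
import numpy as np

def drag_to_euler(dh, dk, H, K, z=0.4, x=0.3, y=0.3):
    """Map a mouse drag (dh, dk) in an H-by-K window to Euler angles
    <alpha, beta, gamma> per the reconstructed equation (4).
    z, x, y are the influence factors weighting dh against dk."""
    s = np.clip([z * dh / H + (1 - z) * dk / K,
                 x * dh / H + (1 - x) * dk / K,
                 y * dh / H + (1 - y) * dk / K], -1.0, 1.0)
    return np.arcsin(s)  # rotations about the l, h, k directions (radians)

# usage: a 120-px horizontal, 45-px vertical drag in a 1280x720 window
alpha, beta, gamma = drag_to_euler(dh=120, dk=-45, H=1280, K=720)
```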

Results and analysis
Analysis of animation scene splicing system

After the 3D animated characters and scene are built, texture maps and control parameters must be set for the model to improve realism. Texture mapping can simulate fine and irregular color textures on characters (objects). With texture mapping, any planar image (scene) can be wrapped onto the surface of a 3D animation model, giving the surface a more realistic color texture, increasing the realism of the 3D animation and simplifying the modeling process. Texture mapping involves two steps:

Determine the texture attributes: decide which parameters of the character (object) surface should be defined in the form of a texture;

Construct the mapping between texture space and character (object) space, and between character (object) space and screen space.

A model with only basic contour features and no surface texture detail looks less realistic, so texture mapping is required for each module. The overall direction of a surface patch determines the spatial plane onto which the patch is projected: it is projected onto the plane making the smallest angle with the patch's overall direction [15]. Since some parts of the character (object) surface are curved, the overall direction of the curved surface must be calculated, as follows:

Suppose the three vertices of a triangular patch are v₁, v₂, v₃. The cross product [v₁v₂] × [v₂v₃] is perpendicular to the patch; normalizing it gives the patch's unit normal vector V. Summing and averaging the normal vectors of all patches gives the average vector sum representing the module, which captures the module's overall direction, as shown in equation 5:

$$W = \sin\alpha \sum_{i=1} v_i - \frac{F(1 - a)\,\Delta g}{G} \quad (5)$$
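The normal-vector averaging just described is straightforward to sketch. The minimal NumPy example below computes unit normals per triangular patch and their normalized vector sum (the overall module direction); function and variable names are illustrative.

```python
import numpy as np

def patch_mean_normal(vertices, triangles):
    """Unit normals per triangular patch and their normalized vector sum
    (the overall direction of the module, cf. equation (5))."""
    a, b, c = (vertices[triangles[:, i]] for i in range(3))
    n = np.cross(b - a, c - b)                      # perpendicular to each patch
    n /= np.linalg.norm(n, axis=1, keepdims=True)   # normalize each normal
    mean = n.sum(axis=0)                            # vector sum over patches
    return n, mean / np.linalg.norm(mean)

verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0.2]])
tris = np.array([[0, 1, 2], [1, 3, 2]])
normals, direction = patch_mean_normal(verts, tris)
```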

To find the mapping between a surface patch and its corresponding texture coordinates, the texture coordinates of the grid points can be calculated by a perspective projection transformation, as shown in equations 6 and 7:

$$k\,[u\ v\ 1]^T = kX' = HX \quad (6)$$

$$H = \begin{bmatrix} h_1 & 0 & 0 & h_4 \\ 0 & h_2 & 0 & h_5 \\ 0 & 0 & h_3 & 0 \end{bmatrix} \quad (7)$$

where h₁, h₂, h₃, h₄, h₅ are unknown parameters; (u, v) is the texture coordinate, X′ the corresponding homogeneous texture coordinate, and X the homogeneous grid coordinate; H is the 3 × 4 perspective projection matrix and k a constant coefficient. From equations 6 and 7, each pair of corresponding mesh and texture vertices yields two independent linear equations, so three groups of feature points must be selected to determine the matrix H. Analyzing the whole texture-mapping process and setting the obtained texture image size to m × n simplifies equation 7.
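A hedged sketch of equations 6 and 7 follows: with h₃ fixed to 1 (H is only defined up to the scale k), the remaining parameters can be fitted from three or more mesh/texture correspondences by least squares. The function names and the test points are invented for illustration.

```python
import numpy as np

def fit_texture_projection(pts3d, uv):
    """Fit the sparse 3x4 matrix H of equation (7) with h3 fixed to 1
    (H is only defined up to the constant k). pts3d: (n, 3) mesh points,
    uv: (n, 2) texture coordinates, n >= 3 as the text requires."""
    X, Y, Z = pts3d.T
    u, v = uv.T
    # from equation (6) with h3 = 1: u*Z = h1*X + h4 and v*Z = h2*Y + h5
    A1 = np.stack([X, np.ones_like(X)], axis=1)
    A2 = np.stack([Y, np.ones_like(Y)], axis=1)
    h1, h4 = np.linalg.lstsq(A1, u * Z, rcond=None)[0]
    h2, h5 = np.linalg.lstsq(A2, v * Z, rcond=None)[0]
    return np.array([[h1, 0, 0, h4], [0, h2, 0, h5], [0, 0, 1.0, 0]])

def project_uv(H, pts3d):
    # equation (6): homogeneous projection, then divide by k = h3 * Z
    P = H @ np.concatenate([pts3d, np.ones((len(pts3d), 1))], axis=1).T
    return (P[:2] / P[2]).T

pts = np.array([[0.0, 0, 2], [1, 0, 2], [0, 1, 2]])   # three feature points
uv = np.array([[0.0, 0], [0.5, 0], [0, 0.5]])
H = fit_texture_projection(pts, uv)
print(project_uv(H, pts))  # recovers uv up to least-squares error
```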

Experimental analysis and results

The Laplacian operator is a second-order differential operator with a good effect on detecting scene edge positions. The wavelet transform is used to decompose the scene to obtain the low-frequency coefficients N(n, m), which are convolved with two Laplacian operators to obtain equation 8:

$$M_i(n, m) = q_1 \otimes N(n, m), \qquad N_j(n, m) = q_2 \otimes N(n, m) \quad (8)$$
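A minimal sketch of equation 8 follows. The paper does not print its two Laplacian templates, so the common 4- and 8-neighbourhood kernels are assumed here:

```python
import numpy as np
from scipy.signal import convolve2d

# two common discrete Laplacian templates (4- and 8-neighbourhood);
# the paper does not list its kernels, so these are assumptions
q1 = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
q2 = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], dtype=float)

def laplacian_responses(N):
    """Equation (8): convolve the low-frequency coefficients N(n, m)
    with the two Laplacian operators."""
    M_i = convolve2d(N, q1, mode='same', boundary='symm')
    N_j = convolve2d(N, q2, mode='same', boundary='symm')
    return M_i, N_j
```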

The corresponding M_i(n, m) and N_j(n, m) (with i = j) are then compared, and the larger of each pair is selected as the candidate coefficient, as shown in equation 9:

$$K(n, m) = \begin{cases} M_i(n, m), & M_i(n, m) \ge N_j(n, m) \\ N_j(n, m), & M_i(n, m) < N_j(n, m) \end{cases} \quad (9)$$

Finally, the scene entropy function and evaluation function determine the final splicing coefficients, as shown in equation 10:

$$H_\omega(n, m) = -\sum_n \sum_m K_n(n, m)\, \ln\big[K_n(n, m)\big] \quad (10)$$
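Equations 9 and 10 can be sketched as follows. Note one implementation choice the paper leaves open: the entropy in equation 10 is computed here on coefficients normalized into a distribution, so the logarithm is well defined.

```python
import numpy as np

def candidate_coefficients(M_i, N_j):
    # equation (9): keep the larger Laplacian response point-wise
    return np.where(M_i >= N_j, M_i, N_j)

def scene_entropy(K):
    """Equation (10), computed on coefficients normalized into a
    distribution so that the logarithm is well defined."""
    p = np.abs(K).ravel()
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# usage sketch: per high-frequency band, keep the source whose candidate
# coefficients carry the larger entropy
# K_a = candidate_coefficients(M_a, N_a); K_b = candidate_coefficients(M_b, N_b)
# fused_band = K_a if scene_entropy(K_a) >= scene_entropy(K_b) else K_b
```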

The H_ω(n, m) values of the high-frequency coefficients of the two images to be spliced are compared, and the larger one is taken as the high-frequency coefficient of the spliced scene. The experiment was run on a computer with Windows 7, an Intel(R) Core(TM) i3 CPU at 2.26 GHz and 4 GB of memory; MATLAB R2010b was used for the programming and simulation. To verify the validity of the improved algorithm presented in this paper, an experiment was conducted on scenes with different focus. The scenes to be spliced were compared, simulated and analyzed with three traditional wavelet-transform algorithms and the improved algorithm proposed in this paper, as shown in Figures 3 and 4.

Figure 3

Traditional algorithm

Figure 4

Improved algorithm

Algorithm 1 uses the weighted-average rule for both high- and low-frequency coefficients; algorithm 2 uses the absolute-maximum rule for both; algorithm 3 uses the weighted-average rule for the high-frequency coefficients and the absolute-maximum rule for the low-frequency coefficients. After comparison, the stitching effect of the improved algorithm in this paper is more satisfactory: unlike the stitched scenes of the traditional algorithms it shows no ghosting, and it also improves brightness and contrast with more complete details. The numbers on the dial and the letters on the box are clearer than with the other methods. After subjectively evaluating the spliced scenes obtained by the several methods, this paper introduces several common objective evaluation indexes to compare and analyze the splicing results (a minimal sketch of these indexes follows the list):

Average gradient: reflects the clarity of the scene, i.e. the variation of texture and the contrast of details. The larger the value, the clearer the scene.

Standard deviation: reflects the contrast of the scene. The greater the contrast, the clearer the stitched scene.

Spatial frequency: measures the overall activity of the scene in the spatial domain. The larger the value, the better the splicing effect.

Sharpness: the clarity of details and their boundaries in the image. The greater the sharpness, the more information from the source scenes has been transferred into the splicing result, and the richer the spliced scene is in information.
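These indexes can be computed with a few lines of NumPy. The definitions below are the common ones and may differ in detail from those used in the paper's experiments:

```python
import numpy as np

def average_gradient(img):
    # mean local gradient magnitude: larger value -> clearer scene
    gx, gy = np.gradient(img.astype(float))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def spatial_frequency(img):
    # combined row/column first differences: overall spatial activity
    img = img.astype(float)
    rf = np.mean(np.diff(img, axis=1) ** 2)
    cf = np.mean(np.diff(img, axis=0) ** 2)
    return np.sqrt(rf + cf)

def evaluate_splice(img):
    return {'average_gradient': average_gradient(img),
            'standard_deviation': float(img.std()),   # contrast
            'spatial_frequency': spatial_frequency(img)}
```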

To verify the smoothness of the motion connections of the animated characters in this system, an experiment was conducted on a video of a basketball player. The video has 60 frames, divided into segments of 10 frames each. The system not only makes the motion segments smooth at the connections but also inherits the original motion details: at the junction of two segments the connection is relatively smooth, the movements of the two segments are joined successfully, and the details of the first movement are retained while its motion data is carried over. The experiments show that the connection at each junction of the system is relatively smooth. To further verify the realism of the system, several motion segments of the heel joint were selected to compare the connection effects of three systems. The system in this paper is relatively smooth at the motion-segment connections, whereas the other two systems are not smooth enough at the connections and cannot faithfully present the behavior of the whole motion segment. When the connection of motion clips is not smooth enough, a flashing effect appears while the 3D animation plays, making the actions of the 3D characters insufficiently realistic. The experiments show that the system connects the motion segments well and makes the actions of the 3D animated characters more realistic.

Conclusion

This paper presents an improved wavelet-transform algorithm for animation scene splicing. In the splicing rule, the high-frequency coefficients are convolved with two Laplacian operators, the results are compared point by point and the larger one is retained; the scene entropy function is then used as the definition evaluation function, which effectively improves the quality of the spliced scene. The low-frequency coefficients are processed by a splicing rule combining the Laplacian sharpness evaluation function with the 8-neighborhood local variance, and the final spliced scene is obtained by the inverse wavelet transform. Comparative analysis shows that the proposed algorithm is superior to the traditional algorithms in both subjective and objective evaluation and achieves a good splicing effect: the spliced scene has rich edge information and high definition, verifying the effectiveness of the improved algorithm.


References

[1] Chen H, Zhou H, Rao Y. Source Wavefield Reconstruction in Fractional Laplacian Viscoacoustic Wave Equation-Based Full Waveform Inversion. IEEE Transactions on Geoscience and Remote Sensing, 2020, PP(99):1–1. doi:10.1109/TGRS.2020.3029630

[2] Abdellaoui B, Fernández AJ. Nonlinear fractional Laplacian problems with nonlocal 'gradient terms'. Proceedings of the Royal Society of Edinburgh: Section A Mathematics, 2020, 150(5):2682–2718. doi:10.1017/prm.2019.60

[3] Li D, Yin J. Paracontact Metric (κ, μ)-Manifold Satisfying the Miao-Tam Equation. Advances in Mathematical Physics, 2021, 2021(6):1–5. doi:10.1155/2021/6687223

[4] Ding H, Zhou J. Comments on "Blow-up and decay for a class of pseudo-parabolic p-Laplacian equation with logarithmic nonlinearity" [Comput. Math. Appl. 75(2) (2018) 459–469]. Computers & Mathematics with Applications, 2021, 84(2):144–147. doi:10.1016/j.camwa.2020.12.008

[5] Sahu A, Priyadarshi A. Existence of Multiple Solutions of a p-Laplacian Equation on the Sierpiński Gasket. Acta Applicandae Mathematicae, 2020, 168(1):169–186. doi:10.1007/s10440-019-00283-z

[6] Ma L. On the Poisson equation of p-Laplacian and the nonlinear Hardy-type problems. Zeitschrift für angewandte Mathematik und Physik, 2021, 72(1):1–8. doi:10.1007/s00033-020-01465-8

[7] Zheng S, Li F. Dynamic Properties of the p-Laplacian Reaction–Diffusion Equation in Multi-dimensional Space. Qualitative Theory of Dynamical Systems, 2021, 20(2):1–15. doi:10.1007/s12346-021-00494-6

[8] Alves CO, Boudjeriou T. Existence of solution for a class of heat equation involving the p(x) Laplacian with triple regime. Zeitschrift für angewandte Mathematik und Physik, 2021, 72(1):1–18. doi:10.1007/s00033-020-01430-5

[9] Yuan L, Li P. Symmetry and Monotonicity of a Nonlinear Schrödinger Equation Involving the Fractional Laplacian. Bulletin of the Malaysian Mathematical Society, Series 2, 2021(3):1–17.

[10] Amiri N, Zivari-Rezapour M. Maximization and minimization problems related to an equation with the p-Laplacian. Indian Journal of Pure and Applied Mathematics, 2020, 51(2):777–788. doi:10.1007/s13226-020-0430-8

[11] Aghili A. Complete Solution For The Time Fractional Diffusion Problem With Mixed Boundary Conditions by Operational Method. Applied Mathematics and Nonlinear Sciences, 2021, 6(1):9–20. doi:10.2478/amns.2020.2.00002

[12] Birindelli I, Galise G. Allen-Cahn equation for the truncated Laplacian: unusual phenomena. Mathematics in Engineering, 2020, 2(4):722–733. doi:10.3934/mine.2020034

[13] Rao SN, Alesemi M. Existence of positive solutions for a system of nonlinear fractional differential equations with p-Laplacian. Asian-European Journal of Mathematics, 2020, 13(05):5–719.

[14] Apdl RA, Fjg A, Cr B. On the robust solution of an isogeometric discretization of bilaplacian equation by using multigrid methods. Computers & Mathematics with Applications, 2020, 80(2):386–394. doi:10.1016/j.camwa.2019.08.011

[15] Bucur R, Breaz D. Properties of a New Subclass of Analytic Functions With Negative Coefficients Defined by Using the Q-Derivative. Applied Mathematics and Nonlinear Sciences, 2020, 5(1):303–308. doi:10.2478/amns.2020.1.00028
