Journal details
License
Format: Journal
eISSN: 2444-8656
First published: 01 Jan 2016
Publication timeframe: 2 times per year
Languages: English
Open Access

The application of directional derivative in the design of animation characters and background elements

Published online: 23 Dec 2022
Volume & Issue: AHEAD OF PRINT
Pages: -
Received: 07 Jun 2022
Accepted: 16 Aug 2022
Introduction

As the main content of animation creation, animation characters are an important factor in attracting the audience and affect the creative quality of the whole animation work [1, 2]. With the rise of the animation industry, the animation character design market has also been promoted, covering animation derivatives, games, advertisements, posters, mascots, etc., and the commercial value of animation characters has gradually increased [3, 4]. The characters in animation works, like the actors in film and television dramas, undertake the roles of interpreting the story, advancing the plot, highlighting the characters' personalities and forming the style of the film [5-7]. Excellent animated images can attract audiences of different ages. With time, audiences may gradually forget the plots of animation works, but animated characters with unique personality charm and modelling characteristics often leave a deep impression on audiences [8-10]. Excellent animation characters convey a correct outlook on life, the world and values through their pleasing modelling design and their smart, humorous and optimistic personalities [11, 12], and to a certain extent they offer ideological inspiration to the audience [13-15]. Many problems arise in animation creation. The application of directional derivatives and visual restoration can solve the problems that appear during the construction of animation characters and allows theoretical calculation and modelling to be performed more efficiently [16].

Video repair is the process of repairing an incomplete area of picture information, mainly to restore the information contained in the primitive itself. In today's society, where data technology is ever more advanced, picture information has become one of the important means of signal transmission for human beings. Video inpainting technology develops in two directions: computer-aided geometric feature inpainting and texture-based video inpainting (completion) [17, 18]. Video enhancement is one of the key technologies in animation design and background element design. Owing to scene instability, inaccurate internal parameter values of the computer, equipment damage and so on, uneven lighting, colour distortion and overlapping areas may appear in a creation as a result of visual degradation. Given the ever-higher human requirements for visual quality, the contrast of the video occupies an important position: the greater the contrast, the better the visual effect of the video and the higher the clarity of details [19, 20].

In contrast to the simulation applications of video texture and primitive repair mentioned above, this paper applies the theory of the directional derivative to the design of animation characters and background elements. Based on the information positioning of the directional derivative, the stable field model and equation of the local texture of a static video can be established, and the clarity and information entropy can be obtained by calculating each known pixel point [21, 22]. These results can then be compared with known algorithms, and the realism and effect of animation background elements under the directional derivative theory can be evaluated [23, 24]. In addition, the directional derivative is calculated in the background based on the restoration time and the peak signal-to-noise ratio, and accurate image restoration parameters are obtained through the research, which indirectly confirms the feasibility of the directional derivative in the design of animation characters and background elements [25, 26].

Introduction to directional derivatives

From the theory of partial derivatives, we know that partial derivatives reflect the rate of change of a multivariate function at a point along the coordinate axes. Many problems in practice, however, involve the rate of change of a multivariate function at a point in other directions. To handle such cases, the concept of the directional derivative of a multivariate function at a point along a given direction is introduced.

The most widely used treatment is 'Advanced Mathematics' published by Tongji University Press. The book takes the binary function z = f(x, y) as an example: the given point is P0(x0, y0), the direction is l = (cos α, cos β), and α and β are the direction angles of the vector l.

When the moving point P(x, y) travels a length ρ along the direction l, the function increment is f(P) − f(P0) = f(x0 + ρ cos α, y0 + ρ cos β) − f(x0, y0), and the moving distance of the point is ρ = |P0P|. If the following limit exists, the directional derivative of z = f(x, y) at the point (x0, y0) along the direction l is defined as this limit.

$${\left. {{{\partial f} \over {\partial l}}} \right|_{({x_0},{y_0})}} = \mathop {\lim }\limits_{\rho \to {0^ + }} {{f(P) - f({P_0})} \over {|{P_0}P|}} = \mathop {\lim }\limits_{\rho \to {0^ + }} {{f({x_0} + \rho \cos \alpha ,{y_0} + \rho \cos \beta ) - f({x_0},{y_0})} \over \rho }$$
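As an illustrative numerical check (not part of the original derivation), the limit definition can be approximated with a small step ρ and compared against the gradient form used later in the paper. The sample function f(x, y) = x²y, the step size and the angle are assumptions of this sketch; note that for a unit direction in the plane, cos β = sin α.

```python
import math

def f(x, y):
    # Sample function for the check; any smooth f(x, y) works (an assumption).
    return x**2 * y

def directional_derivative(f, x0, y0, alpha, rho=1e-6):
    # One-sided difference quotient from the limit definition:
    # move a distance rho along l = (cos(alpha), sin(alpha)).
    return (f(x0 + rho * math.cos(alpha), y0 + rho * math.sin(alpha)) - f(x0, y0)) / rho

x0, y0, alpha = 1.0, 2.0, math.pi / 3
numeric = directional_derivative(f, x0, y0, alpha)
# Analytic value from the gradient form: f_x = 2xy, f_y = x^2.
analytic = 2 * x0 * y0 * math.cos(alpha) + x0**2 * math.sin(alpha)
print(abs(numeric - analytic) < 1e-4)  # True
```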

This article studies the connection between the directional derivative and the design of animation elements on the basis of this definition.

The current rules and requirements for the design of animated characters and background elements

Animation character design is the production activity in which the character designer refines the character information in the script and related materials, analyses the character's shape and personality, and then draws the character's appearance, expressions, postures, movements, clothing and so on, laying the groundwork for production. In the production process of animation works, character design is an important link, and the character image directly affects the style of the work and the expression of the plot. It transmits visual information to the audience at first sight and promotes the development of the story. Animation works usually contain several characters. When designing, we should fully consider the different characters' personalities and their effects on the development of the storyline, so as to produce animation characters with distinct characteristics and improve the audio-visual effect of the work. Excellent animation character design should follow certain principles, which also play an important role in the development of animation derivatives and guarantee commercial benefits.

The design of animation characters should embody the following principles:

Aesthetics

Visual effects are one of the important criteria for measuring the quality of animation works. For audiences to enjoy an animation work visually, it needs beautiful pictures, which include pleasing animation characters.

Simplicity

The concise and lively character image helps to deepen the audience’s memory, and is also conducive to the development of animation derivative products.

Uniqueness

An animation's character design should have individuality, which gives it more appeal and vitality.

Harmony

Animation character design cannot be separated from the storyline and scene design; the three must be consistent.

When designing character actions, there must first be a clear purpose. The design is based on the overall style of the work and the character's personality, follows the laws of motion, and uses elasticity, inertia and curve motion to make moderate exaggerations. Scientific and reasonable pose design can be the key to shaping the character and unfolding the plot. Second, attention must be paid to details. When designing a character's action and pose, a look, a smile, a frown, a widened nostril or clenched teeth are all detailed actions that express the character's emotion and personality. Detailed actions and postures designed around the character's personality make the character fresher and more infectious, and help it resonate with the audience. Character designers should observe life carefully, accumulate material, create according to the plot and the characters' traits, and pursue better design effects.

The character pedigree of an animation work is drawn after the design of all appearing characters is completed: all characters are placed on the same horizon in the design drawing, reflecting the variation and contrast between them. For original artists and animators, the character pedigree defines the relative proportions between different characters, ensuring that when multiple characters appear in the same scene, the correct height ratio and body shape are maintained. This imposes stricter requirements on portraits, especially in terms of the clarity and fidelity of imaging; in close-up shots, confusion in the structure and proportions between characters is not allowed. Therefore, original artists and animators must draw characters strictly under the character pedigree settings to ensure the accuracy and rigour of the animation work.

There are many types of software for animation character creation, falling into two categories: 2D animation production software and 3D animation production software. For details, see Figure 1:

Fig. 1

Classification of animation character creation software

Video enhancement and restoration application of directional derivatives in the design of animation characters and background elements
Video enhancement applications of directional derivatives
Video enhancement theory of directional derivatives

In recent years, researchers have extended the application of directional derivatives into the design and processing of animation characters and scene elements, giving this research a new direction and space for development. Mathematical variables are used to monitor and control the key parameters of the image in real time so as to enhance the video of animations and scenes, making image edges easier to identify while preserving the texture information of smooth regions. The G–L (Grünwald–Letnikov) fractional image enhancement method has been used in the literature to address the facts that integer-order differentiation has little effect on image texture enhancement and that the edge colours of RGB colour images are distorted after differentiation. Combining the G–L fractional differential with the directional derivative, the information of pixels adjacent to the processed pixel can be fully utilised, and the edge information and texture detail of the image can be significantly enhanced.

The Caputo and G–L definitions are equivalent under the condition that the function f(t) satisfies f(k)(a) = 0 for k = 0, 1, 2, …, n − 1 and has an (m + 1)-order continuous derivative with m at least n; otherwise, they are not equivalent. From this analysis, the Caputo definition can be seen as an improvement on the G–L definition; the Cauchy integral formula is not considered because of its complex calculation process, poor timeliness and error-prone computation on large data sets. The G–L definition is mainly used to calculate analytical solutions of simpler functions; the Caputo definition suits the analysis of initial value problems of fractional differential equations and applications in engineering. Since the G–L definition approximates the function as a weighted summation of its displacements, it can easily be rewritten as a convolution of the function with the corresponding weights, making it well suited to the signal field. This paper mainly uses the G–L definition of fractional calculus and compares it with the directional derivative to measure the latter's ability in video enhancement.

Derivation of video enhancement formula of direction derivative
Directional derivative

For the directional derivative in this part, we restate the definition given in the second section. Let z = f(x, y) be defined in the neighbourhood D of the point M(x, y), and draw the ray l from the point M. Let the angle of rotation from the positive x-axis to the ray l be α, and let M0(x + Δx, y + Δy) be another point on l with M0 ∈ D, as shown in Figure 2.

Fig. 2

Schematic diagram of directional derivative theory

According to the properties of directional derivatives, if z = f(x, y) is differentiable at the point M(x, y), then z = f(x, y) has a directional derivative along any direction l at M, given by: $${{\partial f(x,y)} \over {\partial l}} = {{\partial f(x,y)} \over {\partial x}}\cos \alpha + {{\partial f(x,y)} \over {\partial y}}\sin \alpha $$

Similarly, if f(x, y) has continuous k-order partial derivatives in D, the k-order directional derivative of the function f(x, y) along the direction l at the point M is defined as: $${{{\partial ^k}f} \over {\partial {l^k}}} = {\left( {\cos \alpha {\partial \over {\partial x}} + \sin \alpha {\partial \over {\partial y}}} \right)^k}f(x,y)$$

Among them, k = 1, 2, …, n, n + 1.
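The k-order operator above can be sketched numerically for k = 2 by applying a directional difference quotient twice; the polynomial test function and step sizes below are assumptions of this illustration.

```python
import math

def f(x, y):
    # Polynomial test function (an assumption of this demo).
    return x**2 + x * y

def dir_deriv(g, x, y, alpha, h=1e-4):
    # Central difference quotient along the direction l = (cos(alpha), sin(alpha)).
    ca, sa = math.cos(alpha), math.sin(alpha)
    return (g(x + h * ca, y + h * sa) - g(x - h * ca, y - h * sa)) / (2 * h)

def dir_deriv2(g, x, y, alpha, h=1e-3):
    # Apply the operator (cos(a) d/dx + sin(a) d/dy) twice, i.e. k = 2.
    inner = lambda u, v: dir_deriv(g, u, v, alpha)
    return dir_deriv(inner, x, y, alpha, h)

alpha = math.pi / 4
# Expanding the k = 2 operator: cos^2(a) f_xx + 2 cos(a) sin(a) f_xy + sin^2(a) f_yy.
analytic = 2 * math.cos(alpha)**2 + 2 * math.cos(alpha) * math.sin(alpha)
numeric = dir_deriv2(f, 0.7, -0.3, alpha)
print(abs(numeric - analytic) < 1e-4)  # True
```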

Definition of G–L Fractional Calculus

The G–L fractional derivative is defined as: $$_a^GD_t^vf(t) = \mathop {\lim }\limits_{h \to 0} {1 \over {{h^v}}}\mathop \sum \limits_{m = 0}^{\left[ {{{t - a} \over h}} \right]} {( - 1)^m} \times {{{\rm{\Gamma }}(v + 1)} \over {m!{\rm{\Gamma }}(v - m + 1)}} \times f(t - mh)$$

Let us analyse this definition. The left superscript G in $_a^GD_t^v$ marks the G–L definition; v represents the differential order of the function; a represents the lower bound and t the upper bound, that is, a is the initial value of t. The Gamma function is ${\rm{\Gamma }}(n) = \int_0^\infty {e^{ - t}}{t^{n - 1}}dt = (n - 1)!$, h is the differential step size, and $\left[ {{{t - a} \over h}} \right]$ denotes the integer part of ${{t - a} \over h}$. The duration of f(t) is t ∈ [a, t]. If it is divided evenly with step size h = 1, then $m = {\left[ {{{t - a} \over h}} \right]^{h = 1}} = [t - a]$. The difference formula for the v-order differential of f(t) is: $${{{d^v}f(t)} \over {d{t^v}}} \approx f(t) + ( - v)f(t - 1) + {{( - v)( - v + 1)} \over 2}f(t - 2) + \cdots + {{{\rm{\Gamma }}( - v + 1)} \over {m!{\rm{\Gamma }}( - v + m + 1)}}f(t - m)$$
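A minimal sketch of the truncated G–L sum, assuming a uniformly sampled signal with step h. The recursive weight formula w_m = w_{m−1}(m − 1 − v)/m is an algebraic rearrangement of (−1)^m Γ(v + 1)/(m! Γ(v − m + 1)) used to avoid evaluating Γ at awkward arguments.

```python
def gl_weights(v, n_terms):
    # Series coefficients (-1)^m * Gamma(v+1) / (m! * Gamma(v-m+1)), generated
    # recursively: w_0 = 1, w_m = w_{m-1} * (m - 1 - v) / m.
    w = [1.0]
    for m in range(1, n_terms):
        w.append(w[-1] * (m - 1 - v) / m)
    return w

def gl_fractional_derivative(samples, v, h=1.0):
    # Truncated G-L sum (1/h^v) * sum_m w_m * f(t - m*h), evaluated at the
    # last sample of a uniformly spaced signal.
    w = gl_weights(v, len(samples))
    return sum(wm * s for wm, s in zip(w, reversed(samples))) / h**v

# The first weights match the series 1, -v, (-v)(-v+1)/2, ...
print(gl_weights(0.5, 3))  # [1.0, -0.5, -0.125]
# For v = 1 the sum collapses to the backward difference f(t) - f(t-1).
print(gl_fractional_derivative([0.0, 1.0, 4.0, 9.0], 1.0))  # 5.0
```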

Let H be a variable function, defined to represent the change of the function f(x, y) in the direction l. To specify the amount of change of the function f(x, y) along the direction l at any point M(x, y), the fractional directional derivative can be derived as: $$_0^GD_t^vf({x_0},{y_0}) = \sum\limits_{n = 0}^\infty {1 \over {n!}}{{{\partial ^n}f} \over {\partial {l^n}}}\,_0^GD_t^v{H^n} + {1 \over {(n + 1)!}}{{{\partial ^{n + 1}}f(x + \theta {\rm{\Delta }}x,y + \theta {\rm{\Delta }}y)} \over {\partial {l^{n + 1}}}}\,_0^GD_t^v{H^{n + 1}}$$ where 0 < θ < 1; the above formula is called the fractional directional derivative along the direction l at the point M(x, y).

Video enhancement effect and analysis of direction derivative

To process the pixels on the image boundary, the processed image must be calculated correspondingly in the computer-aided measurement system, and enhancement indices that can be realised for the image must be explored. Since the differential order of the fractional differential calculation is adjustable, different fractional orders are used to process the image so that the order with the better enhancement effect can be selected. To verify the superiority of the directional derivative, we compare the Laplacian method, the G–L fractional differential method and the directional derivative itself. The Laplacian is the mainstream practical method for image enhancement, but in most cases the enhanced image still looks rough under micro-vision: considerable detail information is lost and much noise remains, so the enhancement effect is poor; the algorithm is also not very effective for the dark parts of the image, especially the texture. The G–L fractional differential method does enhance the image, with a slightly smaller amplitude and no sharpening, but the effect is still not obvious. The enhancement effect of the directional derivative alone is better, though with slight blurring at the image edges; when the directional derivative is used as a parameter aid, the contrast of the overall grey level of the animated character and the background image increases significantly, the local details are obviously enlarged, and the texture is clearly visible. The experimental results show that the directional derivative achieves the best enhancement when k = 2–4, and k = 3 is selected in this paper. In practical applications, the method in this paper can therefore select the value of k according to the required degree of image enhancement, to achieve the optimal image under the assistance of directional derivative parameterisation.

The quality of image enhancement in this paper is evaluated quantitatively, reflecting the change of the grey values of the image, and finally a comprehensive evaluation is made. Let xin be the original image, xout the enhanced result image and (M, N) the size of the image; then:

Clarity

$$S = {1 \over {MN}}\sum\limits_{i = 1}^M \sum\limits_{j = 1}^N \sqrt {{{({\rm{\Delta }}{I_x}(i,j))}^2} + {{({\rm{\Delta }}{I_y}(i,j))}^2}} $$

Among them: $${\rm{\Delta }}{I_x}(i,j) = {x_{out}}(i,j) - {x_{out}}(i - 1,j),\quad {\rm{\Delta }}{I_y}(i,j) = {x_{out}}(i,j) - {x_{out}}(i,j - 1)$$

Clarity characterises the contrast of the tiny details of the video.
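As a small, self-contained illustration of this metric, the clarity S can be computed directly from the backward differences above; the zero differences assumed on the first row and column are a boundary-handling assumption of this sketch.

```python
import math

def clarity(img):
    # S = (1/(M*N)) * sum over (i, j) of sqrt(dIx^2 + dIy^2), with backward
    # differences dIx(i,j) = X(i,j) - X(i-1,j) and dIy(i,j) = X(i,j) - X(i,j-1).
    M, N = len(img), len(img[0])
    total = 0.0
    for i in range(M):
        for j in range(N):
            dx = img[i][j] - img[i - 1][j] if i > 0 else 0.0
            dy = img[i][j] - img[i][j - 1] if j > 0 else 0.0
            total += math.sqrt(dx * dx + dy * dy)
    return total / (M * N)

flat = [[5.0] * 4 for _ in range(4)]                         # constant image: no detail
ramp = [[float(i + j) for j in range(4)] for i in range(4)]  # diagonal grey ramp
print(clarity(flat))                   # 0.0
print(clarity(ramp) > clarity(flat))   # True: more contrast gives a higher S
```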

Information entropy

Let the video grey-level set be {d0, d1, …, d_{L−1}}, with corresponding probabilities {p0, p1, …, p_{L−1}}, where L is the number of grey levels. For the image histogram over the grey-level range {0, 1, …, L − 1}, the entropy H is as follows: $$H = - \sum\limits_{i = 0}^{L - 1} {p_i}\ln ({p_i})$$

The larger the entropy value H, the more information the image carries, so the image entropy value is a measure of the richness of the information contained in the image.
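The entropy measure can be illustrated directly from its standard Shannon form H = −Σ p_i ln(p_i); the toy grey-level data below are assumptions of this sketch.

```python
import math

def entropy(pixels, L=256):
    # H = -sum_i p_i * ln(p_i) over the grey-level histogram {0, ..., L-1};
    # empty bins contribute nothing to the sum.
    hist = [0] * L
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log(c / n) for c in hist if c > 0)

uniform = list(range(8)) * 4   # every grey level 0..7 equally likely
flat = [3] * 32                # a single grey level carries no information
print(abs(entropy(uniform, 8) - math.log(8)) < 1e-12)  # True: H = ln L is maximal
print(entropy(flat, 8) == 0.0)                          # True
```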

The differential order of the G–L fractional differential operator can be adjusted; the image is processed at each order and the order with the better enhancement effect is retained. To verify the superiority of the method in this paper, we analyse and compare the Laplacian method, the G–L fractional derivative method and the directional derivative (k = 2) method; the test results are shown in Table 1.

Information entropy and clarity of various enhancement methods

Method        Laplacian   G–L       Directional derivative (k = 2)
Image 1   S   0.2305      6.5574    12.0354
          H   1.0455      7.5638    7.8928
Image 2   S   0.1788      5.4325    5.9807
          H   1.0022      6.8725    7.0901
Image 3   S   0.4482      10.1075   14.8263
          H   1.0228      6.5826    7.0304

It can be seen from Table 1 that the sharpness and information entropy of the image enhanced by the Laplacian method are relatively low, resulting in a poor visual effect; the G–L method has a small enhancement range, resulting in low image sharpness. The directional derivative (the method applied in this paper) enhances the image with the highest clarity and the largest information entropy, indicating that it has the best enhancement effect. To sum up, the directional derivative proposed in this paper makes full use of the information of the surrounding pixels. Both subjective and objective evaluations show that the directional derivative has a significant effect on image enhancement. It follows that in designs demanding higher-quality video processing for animation characters and background elements, the directional derivative can bring a better effect.

Video repair application of directional derivatives
Derivation of vision repair formula for direction derivatives

At present, research on image inpainting technology concentrates on texture special effects and set-feature video inpainting, which is mainly used for the reconstruction of small defect areas, mostly via the solution of partial differential equations (PDEs). The edge information of the defect region is diffused from the region boundary inwards, with the diffusion information and diffusion direction determined by the edge information. Representative algorithms are the BSCB model and the curvature-driven diffusion repair algorithm (CDD model). Besides the different mathematical models used, the main factors affecting the restoration effect are the selection of the diffusion information and the control of the diffusion direction. In general, these methods aim to make the overall effect of the restored image imperceptible to the human eye, focusing on the overall visual effect rather than on reconstructing the accurate value at a particular point; the clarity of edge and texture detail repair is not deliberately pursued. In addition, because this type of repair model requires repeated iterations and high-order partial derivative calculations, its repair efficiency suffers. Starting from the image texture itself and the mechanism of its formation, video inpainting based on directional derivatives is therefore proposed.


The directional derivative of a scalar field describes the rate at which the scalar function changes in a certain direction in the field. As shown in Figure 2, for an image scalar field with defects, let I0(ri) be the function describing the texture of the known area of the image, ri the coordinates of any point in the field, l any ray from ri in the field, and r a point adjacent to ri on l, with distance Δl between the two points. Then the directional derivative of I0(ri) at the point ri along the direction l is defined as: $${{\partial {I_0}({r_i})} \over {\partial l}} = \mathop {\lim }\limits_{{\rm{\Delta }}l \to 0} {{{I_0}(r) - {I_0}({r_i})} \over {{\rm{\Delta }}l}}$$

Therefore, when Δl is extremely small, we have $${I_0}(r) \approx {\rm{\Delta }}l{{\partial {I_0}({r_i})} \over {\partial l}} + {I_0}({r_i})$$
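As a quick check of this first-order estimate (an illustration, not part of the original derivation), on a linear grey ramp the estimate is exact; the ramp coefficients and the direction are assumptions of this sketch.

```python
def estimate_neighbor(value_at_ri, dir_deriv_at_ri, delta_l):
    # First-order estimate I0(r) ~ I0(ri) + delta_l * dI0(ri)/dl.
    return value_at_ri + delta_l * dir_deriv_at_ri

# Linear grey texture I0(x, y) = 3x + 4y; along the unit direction l = (0.6, 0.8)
# the directional derivative is 3*0.6 + 4*0.8 = 5.0 everywhere, so the
# first-order estimate is exact.
ri = (1.0, 2.0)
i0_ri = 3 * ri[0] + 4 * ri[1]                    # 11.0
delta_l = 0.5
r = (ri[0] + 0.6 * delta_l, ri[1] + 0.8 * delta_l)
true_value = 3 * r[0] + 4 * r[1]
est = estimate_neighbor(i0_ri, 5.0, delta_l)
print(abs(est - true_value) < 1e-9)  # True
```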

The ray drawn from the point ri along the normal direction of the equipotential line is the direction in which the scalar field I0(ri) has its maximum rate of change; this maximum rate of change is the gradient of I0(ri) at the point ri.

Since the influence of far-away points is too small to matter, when actually reconstructing a defective pixel, the area within a distance of 5 from the target reconstruction point is taken as the effective area, and only the effects of points within this area are considered. Define: $${I_0}({r_i}) = \left\{ {\matrix{ {\varphi ({r_i})} \hfill & {{r_i}\;{\rm{known}}} \hfill \cr 0 \hfill & {{r_i}\;{\rm{unknown}}} \hfill \cr } } \right.$$


Let ∇I0(ri) represent the gradient of the pixel at the point ri, and ∂I0(ri)/∂l represent the directional derivative at the point ri in the direction of (r, ri). Figure 3 marks these quantities at a randomly selected point. On the premise that |r − ri| is of an appropriate order of magnitude (taking the normalised value), we obtain: $$\hat I(r,{r_i}) = {I_0}({r_i}) + {{\partial {I_0}({r_i})} \over {\partial l}} \times |r - {r_i}|$$

In the formula, Î(r, ri) represents the estimated value of I(r) under the influence of the point ri alone, and $${{\partial {I_0}({r_i})} \over {\partial l}} = |\nabla {I_0}({r_i})|\cos \theta $$

In the formula, θ is the angle between the gradient ∇I0(ri) and the vector (r, ri), which gives: $$\hat I(r,{r_i}) = {I_0}({r_i}) + |\nabla {I_0}({r_i})| \times |r - {r_i}| \times \cos \theta $$
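The estimate Î(r, ri) can be combined over several known neighbours to reconstruct a missing pixel. The following sketch assumes inverse-distance weighting inside the radius-5 effective area (the combination rule is not specified in the text, so the weighting here is an assumption) and uses a linear texture, for which every neighbour estimate is exact.

```python
import math

def reconstruct(known, grads, r, radius=5.0):
    # Each known point ri contributes the estimate
    #   I_hat(r, ri) = I0(ri) + |grad I0(ri)| * |r - ri| * cos(theta),
    # with theta the angle between grad I0(ri) and the vector (r - ri).
    # Estimates inside the radius-5 effective area are combined with
    # inverse-distance weights (the weighting rule is an assumption here).
    num = den = 0.0
    for ri, value in known.items():
        dx, dy = r[0] - ri[0], r[1] - ri[1]
        dist = math.hypot(dx, dy)
        if dist == 0.0 or dist > radius:
            continue
        gx, gy = grads[ri]
        gmag = math.hypot(gx, gy)
        cos_theta = (gx * dx + gy * dy) / (gmag * dist) if gmag > 0 else 0.0
        estimate = value + gmag * dist * cos_theta
        weight = 1.0 / dist
        num += weight * estimate
        den += weight
    return num / den

# Linear texture I0(x, y) = 2x + y: each neighbour estimate is exact, so the
# combination recovers the missing pixel at (1, 1) exactly (value 3).
pts = [(0.0, 0.0), (3.0, 0.0), (0.0, 3.0)]
known = {p: 2 * p[0] + p[1] for p in pts}
grads = {p: (2.0, 1.0) for p in pts}
print(abs(reconstruct(known, grads, (1.0, 1.0)) - 3.0) < 1e-9)  # True
```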

The effect and analysis of the visual restoration of the directional derivative

After the above parameters are calculated and processed mathematically, Table 2 lists the repair or reconstruction time and peak signal-to-noise ratio (PSNR) of the above three algorithms. This makes it convenient to compare the impact of image size on the repair or reconstruction time of the algorithms; at the same time, Image 1 and Image 4 are damaged respectively, with quite different numbers of defective pixels in each image, so as to compare the impact of the size of the defect area on each algorithm's repair and reconstruction time.

It can be seen from Table 2 that the repair time of the BSCB algorithm is mainly affected by the size of the image itself; the repair time of the CDD algorithm is greatly affected by both the image size and the size of the defect area; while the repair time of the algorithm in this paper is almost completely determined by the size of the defect area. A horizontal comparison of the repair times in the table shows that the image scalar directional derivative repairs more efficiently than the previous two algorithms, and the larger the image, the more obvious its advantage in efficiency. From the perspective of PSNR, the scalar directional derivative algorithm is obviously better than the others: the BSCB and CDD methods each have their own advantages for different types of images and damage, but neither matches the image scalar directional derivative. To sum up, the image scalar directional derivative and the first two algorithms all achieve good overall visual effects in image reconstruction or restoration. Since the reconstruction model in this paper focuses on the reconstruction of each defective pixel, performing full calculations for every defective point, the algorithm can reconstruct edge and texture details more clearly and accurately; however, because the calculation accuracy of the gradient and the influence function is limited, the reconstruction accuracy for edge details is not high enough, and edges cannot be reconstructed completely and accurately. Since the reconstruction model avoids repeated iterations and high-order partial derivative calculations, it is highly efficient; especially for large images, the reconstruction time advantage of the image scalar directional derivative is obvious.

Comparison of repair time and PSNR of several algorithms. PSNR, peak signal-to-noise ratio

Pattern    Total image pixels   Defect pixels   Repair time/s (BSCB / CDD / directional derivative)   PSNR/dB (BSCB / CDD / directional derivative)
Image 1    128*128              366             5 / 38 / 2                                            36.8 / 38.2 / 46.9
Image 2    512*512              2405            58 / 287 / 6                                          45 / 47.3 / 36.8
Image 3    686*1024*3           5933            389 / 1167 / 15                                       45.5 / 46.8 / 42.5
Conclusion

The specific application research of the directional derivative in video enhancement and video restoration shows that the directional derivative is also suitable for the design of animation characters and background elements, and can outperform traditional algorithms. On the video enhancement side, the directional derivative can raise the clarity of the video and the amount of information it carries to a higher level; on the restoration side, the directional derivative can also repair damaged video, with repair time and PSNR that stand out against the comparison algorithms. Based on the above, the conclusions can be summarised as follows:

In the aspect of video enhancement, the traditional Laplacian, the G–L fractional derivative and the directional derivative can all achieve a reasonable video visualisation effect, but in quantitative terms, for both clarity and entropy the value achieved by the directional derivative is higher than that of the other two methods, indicating that the directional derivative has an excellent ability to enhance the image.

In terms of image restoration, when dealing with higher-pixel images, comparing the BSCB model, the curvature-driven diffusion restoration algorithm (CDD model) and the directional derivative model on the two index values of restoration time and PSNR shows that the directional derivative has the shortest restoration time and the smallest PSNR value of the three, while maintaining a good repair and shaping force for the texture details and edge construction of the video.

Directional derivatives have practical applications in both visual enhancement and visual restoration in the design of animated characters and background elements, and they play an important guiding role in the design of the rising national comics and of scene background elements. Especially for texture filling, realistic characterisation and the self-healing of edge details, the algorithmic principle of the directional derivative can bring higher efficiency.


References

Wu C M. Studies on Mathematical Model of Histogram Equalization[J]. Acta Electronica Sinica, 2013, 41(3): 598-602.

Ramlugun G S, Nagarajan V K, Chakraborty C. Small retinal vessels extraction towards proliferative diabetic retinopathy screening[J]. Expert Systems with Applications, 2012, 39(1): 1141-1146. DOI: 10.1016/j.eswa.2011.07.115

Zhang L, Huang F P, Zheng E R. Image Enhancement Based on Rough Sets and Wavelet Unsharp Masking[J]. Acta Photonica Sinica, 2008, 37(6): 1285-1288.

Bhutada G G, Anand R S, Saxena S C. Edge preserved image enhancement using adaptive fusion of images denoised by wavelet and curvelet transform[J]. Digital Signal Processing, 2011, 21(1): 118-130. DOI: 10.1016/j.dsp.2010.09.002

Kiragu H, Mwangi E. An improved enhancement of degraded binary text document images using morphological and single scale retinex operations[C]//Image Processing. IET, 2012: 112. DOI: 10.1049/cp.2012.0420

Ayaou T, Boussaid M, Afdel K, et al. Enhancing road signs detection rate using Multi-Scale Retinex[C]//Multimedia Computing and Systems (ICMCS), 2012 International Conference on, 2012. DOI: 10.1109/ICMCS.2012.6320208

Wang S J, Ding X H, Liao Y H, et al. A Novel Bio-inspired Algorithm for Color Image Enhancement[J]. Acta Electronica Sinica, 2008.

Infrared Image Enhancement Method Based on Stationary Wavelet Transformation and Retinex[J]. Acta Optica Sinica, 2010, 30(10): 2788-2793. DOI: 10.3788/AOS20103010.2788

Hong G, Zhang Q. Improved Morphological Watershed Algorithm to Enhance Image Detail and Denoise Ability[J]. Journal of Graphics, 2013, 34(3): 7-11.

Cafagna D. Fractional calculus: A mathematical tool from the past for present engineers [Past and present][J]. IEEE Industrial Electronics Magazine, 2007, 1(2): 35-40. DOI: 10.1109/MIE.2007.901479

Fractional differential approach to detecting textural features of digital image and its fractional differential filter implementation[J]. Science in China Series F: Information Sciences, 2008, 51(9): 21. DOI: 10.1007/s11432-008-0098-x

Gao C. Fractional Directional Differentiation and Its Application for Multiscale Texture Enhancement[J]. Mathematical Problems in Engineering, 2012. DOI: 10.1155/2012/325785

Lin K R. Analysis and Comparision of Different Definition About Fractional Integrals and Derivatives[J]. Journal of Fuzhou Teachers College, 2003.

University S. Image Enhancement Based on Fractional Differentials[J]. Journal of Computer-Aided Design & Computer Graphics, 2008, 20(3): 343-348.

Guo H, Li X U, Yifei P U, et al. Summary of research on image processing using fractional calculus[J]. 2012, 29(2): 414-420.

Huang G, Chen Q L, Lib X U, et al. Realization of adaptive image enhancement with variable fractional order differential[J]. Journal of Shenyang University of Technology, 2012. DOI: 10.1109/CCDC.2012.6244164

Bao P W. The Taylor Formula and Taylor Series and its Application[J]. Journal of Huaihua University, 2011.

Zhang Y A, B Y F, Zhou J. Image enhancement masks based on fractional differential[J]. Application Research of Computers, 2012.

Liang D, Yin B, Yu M, et al. Image Enhancement Based on the Nonsubsampled Contourlet Transform and Adaptive Threshold[J]. Acta Electronica Sinica, 2008.

Shen J, Chan T F. Mathematical Models for Local Nontexture Inpaintings[J]. SIAM Journal on Applied Mathematics, 2002, 62: 1019-1043. DOI: 10.1137/S0036139900368844

Landau L, Lifschitz E. The classical theory of fields[J]. Physics Today, 1963, 16(6): 72-73. DOI: 10.1063/1.3050989

Yang C S, Pollock L. Identifying Potentially Load Sensitive Code Regions for Stress Testing[J]. 1996.

Garousi V, Briand L C, Labiche Y. Traffic-aware stress testing of distributed real-time systems based on UML models using genetic algorithms[J]. Journal of Systems & Software, 2006, 81(2): 161-185. DOI: 10.22215/etd/2006-06634

Fang C H, Tao Y N, Eang J G, et al. Mapping Relation of Leakage Currents of Polluted Insulators and Discharge Arc Area[J]. Frontiers in Energy Research, 2021. DOI: 10.3389/fenrg.2021.777230

Provos N. A Virtual Honeypot Framework[C]//Proceedings of the 13th USENIX Security Symposium, August 9-13, 2004, San Diego, CA, USA, 2004.

Che H, Wang J. A collaborative neurodynamic approach to global and combinatorial optimization[J]. Neural Networks, 2019, 114: 15-27. DOI: 10.1016/j.neunet.2019.02.002
