Open Access

Research on traditional oil painting innovation and creation technology assisted by artificial intelligence algorithm

  
26 Mar 2025

Introduction

Oil painting originated in Europe in the 15th century, and its technique was a revolutionary innovation in the painting medium of the time. Artists began to use oil-based pigments, which dry slowly and produce unique layers and depths of color, greatly expanding the possibilities of artistic expression [1-2]. At the heart of traditional oil painting technique lies precise control of color saturation and fine texture: the unique properties of oil-based pigments allow the artist to create rich visual effects and a deep sense of space through layering [3-5]. From the Renaissance's perspective and detailed depiction of figures, through the dramatic expression of light, shadow and movement in the Baroque and Rococo periods, to the Impressionists' improvisational grasp of light and color, the techniques and styles of each period profoundly reflect the cultural spirit and aesthetic tendencies of their times [6-9]. With the arrival of the big-data era and the rapid development of neural network architectures and computer science, artificial intelligence has entered the field of art creation and successfully produced works of art, and oil painting is not immune to its impact [10-13].

In modern art, AI technology has already been applied across a variety of fields. Artificial intelligence can handle complex mathematical algorithms and visualize complex scientific concepts and data, providing new sources of inspiration for artists [14-16]. The fusion of the two not only creates a dialogue between oil painting and science, but also offers viewers a new way of understanding and appreciating oil paintings, so that they are no longer merely aesthetic objects but also carriers of knowledge and ideas [17-20]. In artistic creation, AI is becoming a new creative medium and means of expression, giving artists more room for imagination and exploration [21-23]. AI painting is already in practical use in a variety of industries, so the field of oil painting creation will inevitably grow closer to AI, and the combination of the two will become increasingly evident.

First, the integration of AI technology and art creation allows artists to break the boundaries of the medium and obtain virtual images of oil paintings through digital media interaction. Qi, A. et al. used AI technology to expand the expressive potential of traditional oil paintings, not only enhancing their emotional power but also providing a brand-new perspective for art appreciation and creation by building a virtual interactive environment between the paintings and the audience [24]. Ping, Y. developed a three-dimensional display system for oil paintings based on visual communication, which greatly improved their clarity and visual effect by accurately restoring the color levels and details of the originals [25]. Through such practices, artists can explore the hidden laws and aesthetics of the real world and transform them into artistic creations with depth and breadth.

Second, artificial intelligence technology improves the picture texture and emotional expression of traditional oil paintings. Guo, H. et al. established an emotional analysis model for oil painting creation based on neural network algorithms and big data, which can effectively analyze the emotional expression of oil painting themes in public environments and help artists create works with distinctive emotions [26]. Dutta, T. et al. proposed an emotion-based oil painting technology that uses smart devices to sense colors and lighting during the painting process, helping artists understand the emotional state they convey [27]. Since the emotional expression of an artwork determines the aesthetic experience of the final work, affecting its artistic appeal and how audiences respond to it, the new creation and expression methods opened up by AI have great application value for improving the artistry of oil paintings.

Furthermore, artificial intelligence technology can help artists analyze traditional oil painting works and the styles and techniques of their creators. Zhao, Y. constructed a deep-neural-network-based model for oil painting image excitation and color-texture feature extraction which, by analyzing oil painting images, helps artists understand their artistic connotations and promotes the development of modern oil painting art [28]. Bai, S. et al. applied an embedded learning model incorporating visual perception algorithms to oil painting classification and simulation, classifying creation styles and the corresponding artists through image feature extraction and analysis, which facilitates the creation process [29]. Such technical support enables artists to summarize painting techniques and innovate more efficiently, exploring expressive effects that are difficult to achieve by traditional means and thus advancing the field of oil painting creation.

Finally, AI can also assist in creating new visual artworks. Jin, X. et al. studied style design and image-optimization techniques for oil painting creation, using a genetic-algorithm-based machine vision method to explore the stylistic features of oil painting graphics and optimize creation accordingly [30]. Wang, Y. et al. showed that image style transfer can help artists create artworks in different styles, optimizing image creation by identifying and combining the color, texture and other stylistic elements of oil painting materials [31]. Using neural style transfer, an artistic style can be applied to different images to create unique works, and the integration of art and science has become a new proposition for oil painting creation.

It can be seen that artificial intelligence technology brings many new changes to artistic innovation. Artificial intelligence algorithms can create more opportunities for the birth and extension of oil painting creation, adding prosperity and splendor to humanity's social life and spiritual world.

This article mathematically models the oil painting process by analyzing the visual perception of art. Based on the basic theory of reinforcement learning, a model-based Deep Deterministic Policy Gradient (DDPG) algorithm is proposed that decomposes the target image into strokes and reconstructs it through a renderer, simulating multi-stroke creation and thus assisting oil painting. The rendering performance of the DDPG model is measured and compared with other renderers, and its superiority in rendering effect is judged by the stroke accuracy of the generated paintings. Finally, the DDPG model is compared with other oil-painting-assisted generation models, and the merits of each are judged through subjective and objective evaluations.

Oil painting technology innovation based on artificial intelligence algorithm
Mathematical modeling and painting process for oil painting
Visual perception of art

Oil painting is created by an artist through his or her imagination and conception of objective things or abstract thoughts. When external things enter the artist's area of creative inspiration, the artist first recognizes and perceives the different scenes, then segments the relevant scenes to varying degrees, and finally determines the specific position of each object and the region of the painting it should occupy. The process of generating an oil painting is shown in Figure 1.

Figure 1. The process of digital oil painting

Color Statistics

Usually, the color distribution in natural images is unbalanced, while that in oil paintings is more harmonious. In this paper, blue in the traditional sense is taken as the cold pole and orange in the traditional sense as the warm pole. In RYB space, the psychological color temperature is expressed as: $$\text{Color temperature} = \frac{\text{Saturation} \cdot \sin(\text{Hue})}{\text{Saturation} \cdot \cos^{2}(\text{Hue}) + 1}$$
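As a minimal sketch, the psychological color temperature formula above can be computed directly from saturation and hue (assuming hue is given in radians; the function name is illustrative):

```python
import math

def color_temperature(saturation: float, hue: float) -> float:
    """Psychological color temperature in RYB space.

    Positive results lean warm (orange side), negative lean cold (blue side).
    """
    return (saturation * math.sin(hue)) / (saturation * math.cos(hue) ** 2 + 1)

# A fully desaturated (grey) color has zero psychological temperature.
print(color_temperature(0.0, 2.0))
```

As the formula implies, the temperature scales with saturation and changes sign with the hue angle.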

In this case, as the saturation of the color changes, so does the color temperature. In terms of psychological color temperature, oil paintings show a tendency to be on the warm side. In terms of saturation, the distribution of oil paintings is relatively even.

Placement of brushes

The specific operation in the brush-placement process is as follows. First, a set of brushstroke dictionaries is selected, whose entries differ from each other in color, size and shape. These brushstrokes are then placed on the canvas, their total energy is calculated, and they are arranged in order of total energy from largest to smallest, forming a chain. This process is repeated, placing brushes in descending order of energy, until the brush with the smallest stroke value has been placed. From this procedure, a model of brush placement is summarized.
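A toy sketch of the energy-ordered placement loop described above; the `Stroke` fields and the `energy` function are illustrative stand-ins, not the paper's actual stroke model:

```python
from dataclasses import dataclass

@dataclass
class Stroke:
    color: tuple      # RGB triple in [0, 1]
    size: float       # brush radius
    shape: str        # e.g. "round", "flat"

def energy(stroke: Stroke) -> float:
    # Placeholder energy: larger, brighter strokes are placed first.
    return stroke.size * max(stroke.color)

def placement_order(dictionary: list) -> list:
    # Place strokes from the largest total energy down to the smallest.
    return sorted(dictionary, key=energy, reverse=True)

brushes = [Stroke((0.2, 0.1, 0.1), 1.0, "round"),
           Stroke((0.9, 0.5, 0.3), 4.0, "flat"),
           Stroke((0.5, 0.5, 0.5), 2.0, "round")]
ordered = placement_order(brushes)
print([b.size for b in ordered])  # [4.0, 2.0, 1.0]
```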

Image Layering Method

Usually, the first thing a painter considers when conceptualizing an image is how to layer it: observing which objects appear in the image and analyzing the occlusion relationships between them provides richer information for the subsequent painting. To better support the oil painting process, this paper adopts the Mean Shift algorithm for image segmentation [32-33].

Given n sample points x₁, x₂, ..., xₙ in a d-dimensional space Rᵈ, the Mean Shift vector at point x is defined by Equation (2): $$M_h(x) = \frac{1}{k} \sum_{x_i \in S_h} (x_i - x)$$

where S_h is a high-dimensional spherical region of radius h, i.e., the set of points y satisfying: $$S_h(x) = \left\{\, y : (y - x)^{T}(y - x) \le h^{2} \,\right\}$$

and k denotes the number of the n sample points x_i that fall into this region.

In Equation (2), (x_i − x) denotes the offset vector of sample point x_i with respect to point x, so the Mean Shift vector M_h(x) is the average of the offset vectors of the k sample points falling into region S_h. Intuitively, since the sample points are drawn from the probability density function f(x), more of them fall on the side of x where the density is higher. The Mean Shift vector M_h(x) therefore points in the same direction as the probability density gradient.

The Mean Shift schematic is shown in Fig. 2. S_h is the area enclosed by the large circle, the small circles are the sample points falling into the area, the black dot is the reference point x, and each arrow is the offset of a sample point from x. As the figure shows, the mean shift M_h(x) points in the direction of the gradient of the probability density function, i.e., toward the region where the samples are most densely distributed.

Given an initial point x and a tolerance ε, the Mean Shift algorithm loops over the following three steps until the stopping condition is satisfied.

1. Calculate M_h(x).

2. Move x along the shift: x ← x + M_h(x).

3. If ‖M_h(x)‖ < ε, end the loop; otherwise, return to step 1.
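The iteration, together with Equations (2) and (3), can be sketched in Python (a minimal dense implementation with a flat kernel; production code would use kernel weighting and spatial indexing):

```python
import numpy as np

def mean_shift(points: np.ndarray, x: np.ndarray, h: float,
               eps: float = 1e-4, max_iter: int = 100) -> np.ndarray:
    """Iterate the Mean Shift update until the shift is below eps."""
    for _ in range(max_iter):
        # S_h: sample points within the sphere of radius h around x
        in_region = points[np.sum((points - x) ** 2, axis=1) <= h ** 2]
        if len(in_region) == 0:
            break
        m = in_region.mean(axis=0) - x   # M_h(x): average offset of k points
        x = x + m                        # move x along the density gradient
        if np.linalg.norm(m) < eps:      # stop when the shift is tiny
            break
    return x

rng = np.random.default_rng(0)
cluster = rng.normal(loc=[1.0, 1.0], scale=0.1, size=(200, 2))
mode = mean_shift(cluster, x=np.array([0.0, 0.0]), h=3.0)
print(mode)  # converges near the cluster centre (1, 1)
```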

Analyze the main structure

In the process of using the computer to draw digital oil paintings, the key question is how to lay out the brushstrokes reasonably according to the given conditions. Guo et al. proposed a fairly complete main primitive graph model, which mainly covers the following.

First, the principal primitive graph model takes wavelet theory as the core and proposes a primitive parsimony graph model related to natural images.

Second, the principal primitive model proposes the sketching pursuit algorithm, which mainly collects and represents the primitive parsimony map of the image.

Third, the principal primitive model characterizes an image by the property that it can be depicted with primitives.

Fourth, the main primitive model represents a primitive dictionary by learning the original parsimony graph.

Diffusion of directions

Diffusion techniques are widely used in computing, especially in image processing, because diffusion can effectively smooth image noise. In the drawing process, the main structural part of the image can be diffused and generalized to the non-structural part. The heat diffusion equation is shown in Equation (4): $$\frac{\partial u}{\partial t} = \Delta u$$

Figure 2. Mean Shift diagram

Thermal diffusion can also be redefined in diffusion technology as a drop in energy, i.e., a cooling process. The energy takes two main forms: the continuous form (Eq. 5) and the discrete form (Eq. 6): $$\varepsilon(u) = \iint_{x,y} |\nabla u|^{2} \, dx \, dy$$ $$\varepsilon(u) = \frac{1}{2} \sum_{k} (u_{k+1} - u_k)^{2}$$

Following a gradient descent strategy, the energy-decreasing process is easy to derive, which yields the discrete solution of the equation. Note that thermal diffusion and directional diffusion are quite different and cannot be substituted for each other.
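To see that gradient descent on the discrete energy of Eq. (6) reproduces discrete heat diffusion, differentiate ε(u) with respect to each u_k: the interior gradient is −(u_{k+1} − 2u_k + u_{k−1}), so each descent step is one explicit step of the discrete heat equation. A toy 1-D sketch:

```python
import numpy as np

def diffuse(u: np.ndarray, alpha: float = 0.2, steps: int = 200) -> np.ndarray:
    """Gradient descent on eps(u) = 1/2 * sum_k (u[k+1] - u[k])**2."""
    u = u.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros_like(u)
        grad[1:-1] = -(u[2:] - 2 * u[1:-1] + u[:-2])  # interior gradient
        grad[0] = -(u[1] - u[0])                      # boundary terms
        grad[-1] = u[-1] - u[-2]
        u -= alpha * grad                             # energy-descent step
    return u

noisy = np.array([0., 5., 0., 5., 0., 5., 0.])
smooth = diffuse(noisy)
print(smooth)  # values pulled toward a common level; the mean is conserved
```

The step size alpha must stay below the stability limit of the explicit scheme (here 0.5), which mirrors the usual CFL-type condition for the heat equation.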

Reinforcement Learning Based Multi-Stroke Drawing Simulation

Building on the oil painting process above, this subsection proposes a model-based Deep Deterministic Policy Gradient (DDPG) algorithm [34]. The core of the algorithm is a painting agent whose goal is to decompose a given target image into a number of strokes and have these strokes reconstruct the target image on the drawing board through a renderer; the algorithm is "model-based" in that it makes explicit use of the models of the discriminator and the neural renderer.

Modeling

The environment of the painting simulation mainly consists of a target image I, a drawing board C, and a neural renderer R, with the initial state of the board being C₀. Given a finite number of steps n, at each step the renderer produces the next board state C_{t+1} = R(C_t, a_t) from the current board state C_t and the received stroke-control parameters a_t, where t is the index of the current step. The agent's goal is to find a sequence of stroke-control parameters a₁, a₂, ..., aₙ such that the final rendered board Cₙ is visually as close as possible to the target image. This process is modeled as a Markov decision process with state space S, action space A, state transition function trans(s_t, a_t), and reward function r(s_t, a_t).
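A toy sketch of the painting MDP loop C_{t+1} = R(C_t, a_t); the blending "renderer" and the hand-coded "policy" below are illustrative stand-ins for the neural networks in the text:

```python
import numpy as np

def render(canvas, action):
    """R(C_t, a_t): apply one 'stroke'. Here action = (blend weight, colour)."""
    weight, colour = action
    return (1 - weight) * canvas + weight * colour

def rollout(target, n_steps=10):
    canvas = np.zeros_like(target)           # C_0: blank board
    for t in range(n_steps):
        action = (0.5, target)               # a crude hand-coded policy
        canvas = render(canvas, action)      # C_{t+1} = R(C_t, a_t)
    return canvas

target = np.array([0.8, 0.2, 0.4])           # a "target image" as a flat vector
final = rollout(target)
print(np.linalg.norm(final - target))        # distance shrinks toward 0
```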

State and state transfer function

In reinforcement learning, the state represents the information the agent observes from the environment. Here the state has three parts: the state of the drawing board, the target image, and the current step ratio, i.e., s_t = (C_t, I, t/n), where the board state C_t and the target image I are 24-bit RGB images of size H × W × 3, and the step ratio t/n is a scalar that tells the agent how many steps remain, which facilitates planning. The state transition function s_{t+1} = trans(s_t, a_t) defines how the board moves from one state to the next; it is partially known because the neural renderer R is used.

Actions

An action a_t ∈ A consists of a set of parameters controlling the position, shape, and color of a stroke, each a real number in [0, 1]. In this paper, the agent's policy function π is deterministic, i.e., π : S → A. At time t the agent observes state s_t and uses the policy to predict the control parameters of the next stroke, a_t = π(s_t); the environment receives the action and transitions to the next state s_{t+1} = trans(s_t, a_t).

Reward

Choosing an appropriate reward function and computing the reward the agent receives at each step is the key to training it. In this paper, the reward function is defined as: $$r_t = r(s_t, a_t) = d_t - d_{t+1}$$

where r_t denotes the reward obtained by the agent at step t, d_t denotes the distance between C_t and the target image I, and d_{t+1} the distance between C_{t+1} and I. Drawing on the idea of generative adversarial networks, the distance between two images is computed by constructing a neural-network-based discriminator D: $$d(C, I) = -D(C, I)$$

The input to the discriminator is the pair (C, I) of board image and target image; the discriminator output D(C, I) is higher when the difference between the board and the target image is smaller, and lower when the difference is larger.
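The sign convention of the reward can be checked with a toy distance, here a plain L2 norm standing in for the learned discriminator-based distance d(C, I):

```python
import numpy as np

def distance(canvas, target):
    # Toy stand-in for d(C, I): plain L2, not the learned discriminator.
    return np.linalg.norm(canvas - target)

def reward(c_t, c_t1, target):
    """r_t = d_t - d_{t+1}: positive when the stroke moved the board closer."""
    return distance(c_t, target) - distance(c_t1, target)

target = np.array([1.0, 1.0])
good_stroke = reward(np.array([0.0, 0.0]), np.array([0.5, 0.5]), target)
bad_stroke = reward(np.array([0.5, 0.5]), np.array([0.0, 0.0]), target)
print(good_stroke > 0, bad_stroke < 0)  # True True
```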

Learning algorithms

Model-based DDPG algorithm

When the neural renderer R is used, the state transition function is partially known, i.e.: $$s_{t+1} = \left( C_{t+1}, I, \frac{t+1}{n} \right) = trans(s_t, a_t) = \left( R(C_t, a_t), I, \frac{t+1}{n} \right)$$

Meanwhile, the discriminator-based reward function is also known. By explicitly using these two known functions, this paper improves the DDPG algorithm into a model-based DDPG algorithm. First, the value network is used to approximate the state-value function instead of the action-value function, i.e.: $$V(s_t; W^{V})$$

where W^V denotes the parameters of the value network. The steps of the improved TD algorithm become:

Use the policy network to compute the next action to be taken: $$a_{t+1} = \pi(s_{t+1}; W^{\pi})$$

Calculate the TD target: $$\hat{v}_t = r_t + \gamma V(s_{t+1}; W^{V}_{now})$$

Compute the loss: $$L = \frac{1}{2}\left[ V(s_t; W^{V}_{now}) - \hat{v}_t \right]^{2}$$

Update the parameters: $$W^{V}_{new} \leftarrow W^{V}_{now} - \alpha_V \nabla_{W^{V}} L$$

where α_V, W^V_now, and W^V_new denote the learning rate, current parameters, and new parameters of the value network, respectively.
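The TD steps above can be sketched with a toy linear value function in place of the CNN value network (the feature map `phi` and the numbers are illustrative):

```python
import numpy as np

def V(phi, W):
    # Toy linear value network V(s; W) = W . phi(s), standing in for the CNN.
    return float(W @ phi)

def td_update(W, phi_t, phi_t1, r_t, gamma=0.9, alpha=0.1):
    """One step of the improved TD rule."""
    v_hat = r_t + gamma * V(phi_t1, W)        # TD target
    err = V(phi_t, W) - v_hat                 # with L = 0.5 * err**2
    return W - alpha * err * phi_t            # W_new = W_now - alpha * grad L

W = np.zeros(3)
phi_t, phi_t1, r_t = np.array([1., 0., 1.]), np.array([0., 1., 0.]), 1.0
loss_before = 0.5 * (V(phi_t, W) - (r_t + 0.9 * V(phi_t1, W))) ** 2
W = td_update(W, phi_t, phi_t1, r_t)
loss_after = 0.5 * (V(phi_t, W) - (r_t + 0.9 * V(phi_t1, W))) ** 2
print(loss_after < loss_before)  # True
```

(In the full algorithm the TD target is computed with a frozen copy of the network parameters, as the W_now notation indicates.)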

Then the update of the policy-network parameters is improved. In the original DDPG algorithm, the policy gradient is obtained by differentiating the action-value function directly: $$g_t = \nabla_{W^{\pi}} Q(s_t, \pi(s_t; W^{\pi}); W^{Q})$$

Here, the Bellman equation is applied before differentiating the state-value function: $$g_t = \nabla_{W^{\pi}} V(s_t; W^{V}) = \nabla_{W^{\pi}}\left( r(s_t, a_t) + \gamma V(s_{t+1}; W^{V}) \right) = \nabla_{W^{\pi}} r(s_t, a_t) + \gamma \nabla_{W^{\pi}} V(s_{t+1}; W^{V})$$

Since: $$r(s_t, a_t) = d(C_t, I) - d(C_{t+1}, I) = d(C_t, I) - d(R(C_t, a_t), I)$$

the chain rule gives: $$\nabla_{W^{\pi}} r(s_t, a_t) = -\nabla_{C_{t+1}} d(C_{t+1}, I) \, \nabla_{W^{\pi}} R(C_t, a_t)$$

Similarly: $$\nabla_{W^{\pi}} V(s_{t+1}; W^{V}) = \nabla_{s_{t+1}} V(s_{t+1}; W^{V}) \, \nabla_{W^{\pi}} R(C_t, a_t)$$

Hence the policy gradient: $$g_t = \nabla_{W^{\pi}} R(C_t, a_t) \left( -\nabla_{C_{t+1}} d(C_{t+1}, I) + \gamma \nabla_{s_{t+1}} V(s_{t+1}; W^{V}) \right)$$

Continuing with the chain rule through the policy function gives: $$g_t = \nabla_{W^{\pi}} \pi(s_t; W^{\pi}_{now}) \, \nabla_{a} R(C_t, a_t) \left( -\nabla_{C_{t+1}} d(C_{t+1}, I) + \gamma \nabla_{s_{t+1}} V(s_{t+1}; W^{V}) \right)$$

The above equation has four main factors: $$\nabla_{W^{\pi}} \pi(s_t; W^{\pi}_{now}), \quad \nabla_{a} R(C_t, a_t), \quad \nabla_{C_{t+1}} d(C_{t+1}, I), \quad \nabla_{s_{t+1}} V(s_{t+1}; W^{V})$$

Since the policy function π, the renderer R, the distance function d, and the value function V are all neural networks, the four derivatives above can be computed explicitly. Finally, the parameters of the policy network are updated by: $$W^{\pi}_{new} \leftarrow W^{\pi}_{now} + \alpha_{\pi} g_t$$
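The four-factor chain-rule product can be sanity-checked on scalar toy functions, comparing the analytic gradient against a finite-difference derivative of r(s_t, a_t) + γV(s_{t+1}); all functions below are illustrative stand-ins for the neural networks:

```python
# Scalar toy versions of pi, R, d and V.
s, C, I, v, gamma = 2.0, 0.3, 1.0, 0.5, 0.9

pi = lambda w: w * s                      # a_t = pi(s; w)
R = lambda a: C + a                       # C_{t+1} = R(C_t, a_t)
d = lambda c: 0.5 * (c - I) ** 2          # d(., I)
V = lambda c: v * c                       # V(s_{t+1})

def objective(w):
    c1 = R(pi(w))
    return (d(C) - d(c1)) + gamma * V(c1)  # r(s_t, a_t) + gamma * V(s_{t+1})

w = 0.7
# The four factors of the analytic gradient:
dpi_dw = s                 # grad_w pi
dR_da = 1.0                # grad_a R
dd_dc1 = R(pi(w)) - I      # grad_{C_{t+1}} d
dV_dc1 = v                 # grad_{s_{t+1}} V
g_analytic = dpi_dw * dR_da * (-dd_dc1 + gamma * dV_dc1)

h = 1e-6
g_numeric = (objective(w + h) - objective(w - h)) / (2 * h)
print(abs(g_analytic - g_numeric) < 1e-6)  # True
```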

Discriminators

In this framework, the discriminator used to compute the reward is also learnable. Let its parameters be W^D. For data s_t = (C_t, I, t/n) drawn from the experience replay buffer, this paper defines x = (C_t, I) as a fake sample and y = (I, I) as a real sample, so the learning objective of the discriminator is: $$\max_{W^{D}} \; \mathbb{E}\left[ D(y; W^{D}) \right] - \mathbb{E}\left[ D(x; W^{D}) \right]$$

Network structure

In this paper, a network structure similar to ResNet-18 is chosen as the feature extractor of the policy and value networks. Their inputs are the states s_t = (C_t, I, t/n) defined earlier (with C = C_t for the policy network and C = C_{t+1} for the value network), where C and I are image tensors of size H × W × 3. The board image, the target image, the step-ratio plane, and the coordinate channels are spliced together into an image tensor of size H × W × 9 as the input to the feature extractor.
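Since 3 + 3 + 1 + 2 = 9, the coordinate channel presumably contributes two planes (normalized row and column positions); a minimal numpy sketch of the splicing under that assumption:

```python
import numpy as np

def build_input(canvas, target, t, n):
    """Splice board, target, step ratio and coordinates into an H x W x 9 tensor."""
    H, W, _ = canvas.shape
    step = np.full((H, W, 1), t / n)                     # step-ratio plane
    ys, xs = np.meshgrid(np.linspace(0, 1, H),
                         np.linspace(0, 1, W), indexing="ij")
    coords = np.stack([ys, xs], axis=-1)                 # 2 coordinate channels
    return np.concatenate([canvas, target, step, coords], axis=-1)

H, W = 4, 4
x = build_input(np.zeros((H, W, 3)), np.ones((H, W, 3)), t=5, n=50)
print(x.shape)  # (4, 4, 9)
```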

Creative effect analysis
Rendering Performance
Stroke accuracy

To verify the stroke-generation accuracy of the DDPG neural renderer constructed in this chapter, the generator network of DCGAN (DCGAN-G), the UNet network, the PixShuffleNet structure, and the Dual-Pathway model are selected to build comparison neural renderers. Experiments compare the accuracy of the strokes generated by these five renderers as well as the perceived image detail of the generated oil paintings.

The accuracy curves (A-E) of randomly generated strokes are plotted for the DCGAN-G, UNet, PixShuffleNet, Dual-Pathway renderers and this paper's DDPG renderer; all renderers are trained under the same settings, with the horizontal axis being the training batch and the vertical axis the accuracy of randomly generated strokes. The stroke-generation accuracy comparison is shown in Figure 3.

Figure 3. The accuracy of brushstroke generation

From the A-E curves in Fig. 3, it can be seen that the weight parameters are continuously optimized up to a training batch of about 150, during which the training accuracy of the neural network gradually increases. At around batch 250, the network converges and stabilizes.

Among the five renderers, PixShuffleNet generates strokes with an accuracy of about 64.12%, UNet about 76.03%, DCGAN-G 80.29%, and Dual-Pathway 77.04%, while the renderer built from the DDPG network constructed in this paper achieves the highest accuracy of about 83.38%, outperforming the other renderers. The DDPG neural renderer clearly has better generation capability.

Algorithm runtime

In addition to stroke accuracy, the operating efficiency of each model is evaluated by computation time. The DCGAN-G, UNet, PixShuffleNet, Dual-Pathway renderers and this paper's DDPG renderer are each run on an RTX 3080Ti for 2000 iterations on 10 experimental images. The measured times are shown in Table 1.

Table 1. Algorithm running time comparison

Picture number DCGAN-G (s) UNet (s) PixShuffleNet (s) Dual-Pathway (s) DDPG (s)
1 6.43 7.29 6.61 7.45 4.72
2 7.83 7.95 6.81 7.92 5.21
3 7.04 8.38 7.29 6.98 3.99
4 7.74 6.78 7.33 7.05 4.32
5 7.83 6.70 6.99 7.01 4.14
6 6.57 6.86 8.02 8.18 4.34
7 6.34 7.66 6.64 7.27 4.18
8 6.13 7.38 8.24 6.79 4.65
9 5.80 7.06 7.43 7.18 3.63
10 7.28 7.56 7.74 7.23 4.16
Mean 6.90 7.36 7.31 7.31 4.33

From Table 1, under the same configuration, the average running times of the DCGAN-G, UNet, PixShuffleNet and Dual-Pathway algorithms are 6.90, 7.36, 7.31 and 7.31 seconds, respectively, while the average time of this paper's DDPG algorithm is 4.33 seconds, i.e., 2.57, 3.03, 2.98 and 2.98 seconds faster than the other renderers while maintaining stroke-generation accuracy. The data show that the proposed DDPG algorithm can quickly and accurately produce oil painting creations whose strokes closely match the content image.

Creative effects

To assess the effectiveness of AI-assisted oil painting creation techniques, this paper adopts both subjective and objective evaluations to quantify the effect of oil painting creation based on artificial intelligence algorithms.

Subjective evaluation

This subsection conducts a human perception study of the proposed model-based Deep Deterministic Policy Gradient (DDPG) algorithm to further analyze its effectiveness. In the study, web pages of oil painting images generated by the different models were presented to participants from both artistic and non-artistic fields, who were asked to score or describe the perceived quality of each oil painting image. Their feedback was collected and statistically analyzed to assess the advantages of the proposed model over the others, including the visual quality of the generated images and the accuracy of their oil painting style. Scores range from 1 to 5, with 5 being the most attractive. A total of 1200 ballots were collected from 30 different participants; the score distribution and average score for each model are shown in Table 2, where each row gives the number of votes per score and the last column the average score. The DDPG model proposed in this paper obtained the most high-scoring votes (scores greater than 3), accounting for 53.58% of its total votes, and the highest average score (808.2). No other model's high-scoring ballots exceeded 35% of its total, and their average scores fell between 606 and 707.8, a large gap from the DDPG model. These results indicate that the oil painting images generated by the DDPG model have a rich oil painting style, better meet viewers' needs, and gain their subjective recognition.

Table 2. Human perceptual study vote results

Model Score-1 Score-2 Score-3 Score-4 Score-5 Average
DCGAN-G 280 355 285 218 62 605.4
UNet 159 362 336 310 33 659.2
PixShuffleNet 300 317 308 203 72 606
Dual-Pathway 252 293 320 262 73 642.2
Gatys 255 305 266 301 73 646.4
AdaIN 183 282 325 233 177 707.8
CycleGAN 336 284 246 247 87 613
Pix2Pix 274 267 331 244 84 639.4
DDPG 152 185 220 356 287 808.2
Objective evaluation

After the subjective evaluation, the effectiveness of the proposed DDPG model for oil painting generation is further verified through objective evaluation. Quantitative experiments are first conducted with the DDPG model, and the results show that the oil painting images generated by the proposed DDPG method can effectively train a corresponding scene image recognition model; to demonstrate robustness, another experiment is conducted with two additional scene recognition models. Qualitative experiments are then added for visual comparison. The oil paintings generated by each algorithm for each style type (classicism, romanticism, realism, impressionism, abstraction, etc.) are evaluated using structural similarity (SSIM), mean square error (MSE), peak signal-to-noise ratio (PSNR), and learned perceptual image patch similarity (LPIPS) as metrics. Finally, an objective evaluation is conducted to obtain a richer presentation of the results. The quantitative results are shown in Table 3.

Table 3. Quantitative experimental results

Model Genre SSIM↑ MSE↓ PSNR↑ LPIPS↓
AdaIN Classicism 0.40 0.1353 10.56 0.4734
Romanticism 0.39 0.1483 10.11 0.3919
Realism 0.54 0.1149 11.63 0.4792
Impressionism 0.48 0.1625 11.80 0.4467
Abstract art 0.42 0.0901 9.67 0.4530
Average 0.45 0.1302 10.75 0.4488
SANet Classicism 0.36 0.1288 11.62 0.5259
Romanticism 0.57 0.1525 10.79 0.3841
Realism 0.40 0.1218 11.74 0.5489
Impressionism 0.48 0.0485 12.12 0.5248
Abstract art 0.57 0.1302 11.08 0.4259
Average 0.48 0.1164 11.47 0.4819
StyTr2 Classicism 0.42 0.0354 9.13 0.4979
Romanticism 0.52 0.1535 11.80 0.4604
Realism 0.42 0.1686 10.27 0.4362
Impressionism 0.47 0.1187 10.64 0.5956
Abstract art 0.54 0.0988 10.30 0.6002
Average 0.48 0.1150 10.43 0.5181
AdaAttN Classicism 0.39 0.1658 11.16 0.6098
Romanticism 0.53 0.0977 9.33 0.5447
Realism 0.50 0.1464 9.18 0.5152
Impressionism 0.47 0.1244 9.39 0.5007
Abstract art 0.33 0.1386 10.79 0.5049
Average 0.44 0.1346 9.97 0.5351
DDPG Classicism 0.53 0.0179 12.14 0.3333
Romanticism 0.63 0.0182 12.84 0.3393
Realism 0.66 0.0175 12.50 0.3241
Impressionism 0.50 0.0181 13.27 0.3106
Abstract art 0.66 0.0194 13.25 0.3067
Average 0.60 0.0182 12.80 0.3228

Observing Table 3, the DDPG method achieves the best results in assisting the generation of all five styles of oil painting (classicism, romanticism, realism, impressionism, and abstraction), with average SSIM, MSE, PSNR, and LPIPS values of 0.60, 0.0182, 12.80, and 0.3228, respectively, exceeding the other oil painting generation algorithms and achieving the best objective evaluation.

Conclusion

This paper applies artificial intelligence algorithms to traditional oil painting creation, assists oil painting creation by simulating the drawing process of oil painting, innovates the traditional oil painting creation technology, and explores and analyzes the rendering function and creation effect of this paper’s algorithms in oil painting creation.

The stroke-generation accuracies of the DCGAN-G, UNet, PixShuffleNet and Dual-Pathway renderers and this paper's DDPG renderer are 80.29%, 76.03%, 64.12%, 77.04% and 83.38%, respectively. The DDPG algorithm clearly renders oil paintings better, and its generation performance exceeds that of all comparison models.

The DDPG model in this paper receives more than half of all high-scoring votes, with an average score of 808.2, while the other models' high-scoring votes account for less than 35% of their totals and their average scores fall in the range 606 to 707.8, clearly inferior to the DDPG model, which received the highest subjective evaluation and viewer recognition. The DDPG method also achieved the best results in assisting the generation of the five oil painting styles, with SSIM, MSE, PSNR, and LPIPS values of 0.60, 0.0182, 12.80, and 0.3228, the best objectively evaluated method for assisted oil painting generation.
