Research on the Reconstruction Technology of Digitized Artworks Based on Image Processing Algorithms and Its Cultural Inheritance Value

Introduction

The rapid development and wide application of digital technology have profoundly changed people's lifestyles and cultural concepts, and at the same time pose new challenges and opportunities for the protection and inheritance of artworks. The application of digital technology has a positive impact on works of art [1-4]. Digital technology provides new means for the protection and preservation of cultural artworks: artworks can be preserved in digital form, avoiding the time and space constraints of traditional preservation methods and reducing damage to the physical works [5-8]. Digital technology promotes the dissemination and popularization of artworks: through digital platforms, people can enjoy all kinds of artworks anytime and anywhere, which greatly enriches cultural life [9-11]. Digital technology also facilitates the research and exhibition of culture and art: researchers can study cultural artworks in depth through digital technology, while exhibition organizations can display them to larger audiences [12-15].

However, the preservation and inheritance of digitized artworks also face challenges. The rapid development of digital technology means that digitized artworks are updated ever more quickly, so their protection must constantly keep pace with new technologies to ensure the integrity and accessibility of cultural and artistic materials [16-19]. The security of digitized artworks is also an important issue: they may be threatened by hacker attacks, data loss or malicious tampering, so network security protection measures must be strengthened [20-22]. Finally, the inheritance of digitized artworks must overcome technical barriers and the digital divide. For elderly people or users with limited technical skills, digital technology may present usage barriers, posing challenges to the inheritance of digitized artworks [23-26].

To make the reconstruction of digitized artworks possible, this paper takes the style migration of artworks as the core of the research, i.e., the reconstruction of digitized artworks is realized through the migration of their artistic styles. The style migration method based on generative adversarial networks serves as the foundation of the whole study: the working principle of generative adversarial networks is analyzed, and three common style migration models based on them, namely Pix2Pix, CycleGAN and StarGAN, are introduced. Based on the CycleGAN network model, an asymmetric cyclic consistency structure is designed to match style transformation directions of different difficulty, and a saliency edge extraction module is designed to extract the saliency edge maps of real natural images, which are then used to compute the saliency edge loss. In addition, a cyclic consistency loss and a saliency edge loss are introduced on top of the original CycleGAN losses: the former measures the consistency between the source domain image and the reconverted source domain image, while the latter applies a balanced cross-entropy loss to the obtained saliency subject edge maps, improving the quality of the generated style migration images and enhancing model performance. Finally, the effect of this paper's art style migration model is explored through migration experiments on the artistic styles of digitized artworks. At the same time, this paper's model and the traditional Pix2Pix method are used to digitally reconstruct three types of artworks, namely paintings, sculptures and crafts, and the eye movement data of the subjects are collected to compare the actual utility and performance of the digitized artworks reconstructed by the two methods.

Style migration model based on generative adversarial networks

With the continuous development of image processing and artificial intelligence technology, the reconstruction of digitized artworks has become feasible. Image style migration based on generative adversarial networks is a mainstream means of automatic painting creation: it automatically learns the style characteristics of a given painting style and applies them to a target image without changing the target image's content information. This greatly innovates and enriches the way people access painting art, and it also provides a realization path for the reconstruction of digitized artworks.

In this chapter, we will discuss the working principle of Generative Adversarial Networks and three style migration models based on Generative Adversarial Networks to lay the theoretical foundation for the subsequent work.

Generative Adversarial Networks

The main idea of generative adversarial networks is to treat the generative network and the discriminative network as two adversaries [27]. Their roles are analogous to a counterfeiter and the police: the generative network plays the counterfeiter, which keeps producing fake images, while the discriminative network plays the police, which keeps trying to screen them out. The two sides improve in a continuous cycle: for the counterfeiter's fakes to get past the police, the counterfeiting technique must improve, and for the police to keep screening out fakes, their discrimination ability must improve as well. When the police can no longer tell which images are fake, the counterfeiting technique is at its best, that is, the generative network produces images of the highest quality. The corresponding objective function is shown in equation (1):

$$\min_G \max_D V(G,D) = \mathbb{E}_{x \sim P_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim P_{noise}(z)}[\log(1 - D(G(z)))]$$

where P_data is the probability distribution of the real data, x is a real image, P_noise is the probability distribution of the noise input, z is the noise vector from which the generator produces an image, and E denotes the expectation.
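
To make the adversarial game in equation (1) concrete, the following is a minimal PyTorch sketch of alternating generator and discriminator updates on a toy one-dimensional data distribution; the network sizes, optimizer settings and data distribution are illustrative assumptions, not the configuration used in this paper.

```python
# Minimal sketch of the min-max game in Eq. (1) on a toy 1-D data distribution.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))                # "counterfeiter"
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())   # "police"

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 3.0      # samples from P_data
    z = torch.randn(64, 16)                    # noise z ~ P_noise
    fake = G(z)

    # D maximizes log D(x) + log(1 - D(G(z)))
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # G minimizes log(1 - D(G(z))) (non-saturating form: maximize log D(G(z)))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```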

Pix2Pix network model

The Pix2Pix network is based on the conditional generative adversarial network. A conditional generative adversarial network differs from an ordinary generative adversarial network in that it learns not only the mapping from the initial noise z to the real image, but also the mapping from the original image x to the real image [28]. It should be noted that Pix2Pix networks are trained on paired datasets. The objective function of the conditional generative adversarial network is shown in Equation (2):

$$G^* = \arg\min_G \max_D \mathcal{L}_{cGAN}(G,D) + \lambda \mathcal{L}_{L1}(G)$$

where G is the generative network, D is the discriminative network, min_G indicates that G minimizes the objective function, and max_D indicates that D maximizes it.

The network structure of Pix2Pix still uses the combination of a generative network and a discriminative network, both built from convolution, batch normalization and ReLU activation. To address the bottleneck layers of the generative network, the skip connections of the U-Net are adopted: a skip connection is added between each layer i and layer n-i, simply concatenating the feature channels of layer i and layer n-i, where n is the total number of layers in the network.
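
As an illustration of this skip-connection idea (not the paper's exact generator), the toy U-Net below concatenates the encoder features of layer i with the decoder features of layer n-i along the channel dimension; the layer count and channel widths are assumptions.

```python
# Illustrative U-Net-style skip connection as used by Pix2Pix-type generators.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU())
        # decoder input = 64 upsampled channels + 64 skip channels from enc1
        self.dec1 = nn.ConvTranspose2d(64 + 64, 3, 4, 2, 1)

    def forward(self, x):
        e1 = self.enc1(x)                       # layer i features
        e2 = self.enc2(e1)                      # bottleneck
        d2 = self.dec2(e2)                      # layer n-i features
        return torch.tanh(self.dec1(torch.cat([d2, e1], dim=1)))   # skip connection by concatenation

out = TinyUNet()(torch.randn(1, 3, 256, 256))   # -> (1, 3, 256, 256)
```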

CycleGAN network model

Like the Pix2Pix network of the previous section, such networks can achieve good conversion results, but they have an obvious drawback: they can only be used with paired datasets, and the conversion results on ordinary (unpaired) datasets are unsatisfactory. This setting is equivalent to supervised learning, yet paired datasets are not only difficult to obtain but also expensive.

CycleGAN introduces the idea of cycle consistency: just as rewriting a sentence from the active voice into the passive voice and then back again should recover the original sentence, converting an image from one domain to the other and then back should recover the original image. In addition to the adversarial loss, a cycle consistency loss is therefore added, and the objective function is shown in equation (3):

$$G^*, F^* = \arg\min_{G,F} \max_{D_X, D_Y} \mathcal{L}(G, F, D_X, D_Y)$$

Here, generator G converts images from domain X to domain Y, generator F converts images from domain Y to domain X, discriminator D_X determines whether an input image belongs to domain X, and discriminator D_Y determines whether an input image belongs to domain Y.

The CycleGAN generator starts with two convolutional layers of stride 2, followed by 9 residual blocks for processing images of size 256×256, and then two transposed convolutional layers of stride 1/2; instance normalization is used to improve the style migration quality of the feed-forward network [29]. The discriminative network uses a 70×70 PatchGAN structure, in which each patch is judged to be real or fake; this structure has fewer parameters and can handle images of arbitrary size.
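
The sketch below reproduces this generator layout (a 7×7 stem, two stride-2 convolutions, nine residual blocks, two stride-1/2 transposed convolutions and a 7×7 output layer) with instance normalization; the kernel sizes and channel widths follow the common CycleGAN recipe and are stated here as assumptions rather than this paper's verified settings.

```python
# Sketch of a CycleGAN-style generator for 256x256 images.
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3), nn.InstanceNorm2d(ch), nn.ReLU(),
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3), nn.InstanceNorm2d(ch))
    def forward(self, x):
        return x + self.body(x)                  # residual connection

def cyclegan_generator(n_res=9):
    layers = [nn.ReflectionPad2d(3), nn.Conv2d(3, 64, 7), nn.InstanceNorm2d(64), nn.ReLU(),
              nn.Conv2d(64, 128, 3, 2, 1), nn.InstanceNorm2d(128), nn.ReLU(),    # stride-2 downsampling
              nn.Conv2d(128, 256, 3, 2, 1), nn.InstanceNorm2d(256), nn.ReLU()]   # stride-2 downsampling
    layers += [ResBlock(256) for _ in range(n_res)]                              # 9 residual blocks
    layers += [nn.ConvTranspose2d(256, 128, 3, 2, 1, output_padding=1), nn.InstanceNorm2d(128), nn.ReLU(),
               nn.ConvTranspose2d(128, 64, 3, 2, 1, output_padding=1), nn.InstanceNorm2d(64), nn.ReLU(),
               nn.ReflectionPad2d(3), nn.Conv2d(64, 3, 7), nn.Tanh()]            # stride-1/2 upsampling + output
    return nn.Sequential(*layers)
```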

StarGAN network model

StarGAN is a newer generative adversarial network that requires only one generative network and one discriminative network for transformations across multiple domains. For images, attributes are defined as features inherent in the image; for example, the attributes of a person's eyes can be single or double eyelids, and the attributes of the nose can be high or flat. A domain represents a set of images sharing the same attribute, so that black hair represents one domain and blond hair another. StarGAN takes the image and the domain information together as inputs, and it mainly learns how to flexibly transform the input image into the corresponding domain, using labels to represent the domain information.

The loss function of the discriminative network is shown in equation (4) and that of the generative network in equation (5):

$$\mathcal{L}_D = \mathcal{L}_{adv} + \lambda_{cls}\mathcal{L}_{cls}^{r}$$

$$\mathcal{L}_G = \mathcal{L}_{adv} + \lambda_{cls}\mathcal{L}_{cls}^{f} + \lambda_{rec}\mathcal{L}_{rec}$$

where L_D is the loss of the discriminative network, L_G is the loss of the generative network, L_adv is the adversarial loss, L_cls^r and L_cls^f are the domain classification losses on real and generated images, λ_cls is the classification loss weight, L_rec is the reconstruction loss, and λ_rec is the reconstruction loss weight.

StarGAN’s image transformation performs better for face attributes and expressions.

Artistic style migration model based on CycleGAN network

In recent years, the style migration technique based on generative adversarial network has been widely applied to the style migration work of artworks and achieved good results. In this chapter, CycleGAN, a style migration model based on generative adversarial networks, will be used as the basis to design a novel asymmetric cyclic consistency structure to obtain high-quality stylized images of artworks and realize the reconstruction of digital artworks.

Quantifying the asymmetry of information richness

Considering that an image domain contains several images, we take the average image entropy of all images in a domain as the image entropy of that domain [30]. The image entropy of an image domain is defined as follows:

$$mEntropy = \frac{1}{N}\sum_{n=1}^{N} Entropy(img_n)$$

where N denotes the total number of images in an image domain, img_n denotes an image instance in the domain, and Entropy(·) denotes the image entropy of the input image, defined in detail in Equation (7):

$$Entropy = -\frac{1}{C}\sum_{c}\sum_{i=0}^{255}\sum_{j=0}^{255} P(c,i,j) \times \log P(c,i,j)$$

where c denotes the channel index in the HSI color space, C is the number of channels, i and j denote the height and width positions in the image, and P(c,i,j) denotes the probability of the pixel value at that position, i.e., its number of occurrences divided by the total number of pixels.

To measure the image entropy difference between the two image domains of an image conversion task more intuitively, we compute the information entropy ratio between the two domains, defined as follows:

$$EntropyRatio = \frac{\max(mEntropy(X),\ mEntropy(Y))}{\min(mEntropy(X),\ mEntropy(Y))}$$

where X and Y denote different image domains in an image conversion task.
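
A small sketch of equations (6)-(8) is given below, interpreting P(c, i, j) through a 256-bin histogram of pixel values per channel; it assumes 8-bit images supplied as uint8 NumPy arrays of shape (H, W, 3) that have already been converted to the HSI color space (the conversion itself is omitted), and uses base-2 logarithms since the paper does not state the base.

```python
# Hedged sketch of Eqs. (6)-(8): per-channel image entropy, domain average, entropy ratio.
import numpy as np

def image_entropy(img):
    entropies = []
    for c in range(img.shape[2]):
        hist = np.bincount(img[:, :, c].ravel(), minlength=256).astype(np.float64)
        p = hist / hist.sum()
        p = p[p > 0]                                   # ignore empty bins (0 log 0 := 0)
        entropies.append(-np.sum(p * np.log2(p)))
    return float(np.mean(entropies))                   # average over channels (1/C sum_c)

def domain_entropy(images):
    return float(np.mean([image_entropy(img) for img in images]))   # Eq. (6)

def entropy_ratio(domain_x, domain_y):
    ex, ey = domain_entropy(domain_x), domain_entropy(domain_y)
    return max(ex, ey) / min(ex, ey)                   # Eq. (8)
```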

To better measure the information asymmetry of the artwork style migration task, we also calculated the information entropy ratios of several generic image conversion tasks. The information entropy ratios of the artwork style migration task and of the asymmetric image conversion tasks are both larger than that of the symmetric image conversion tasks, which indicates that the artwork style migration task is characterized by asymmetry of domain information richness and can be regarded as an asymmetric image conversion task.

Asymmetric cyclic consistency structure

The asymmetric cyclic consistency structure consists of two generators, two discriminators and a saliency edge extractor (SEE). Generator G is a residual-block-based structure for style migration from real nature photos to the image domain of Chinese artworks; generator F is a dense-block-based structure for style migration from Chinese artworks to the real nature photo image domain. The two discriminators D_X and D_Y determine the authenticity of the generated real nature photos and the generated artwork images, respectively. The saliency edge extraction module is used to extract the saliency edge maps from the images.

Generators

Building on the symmetric cyclic consistency structure of CycleGAN, we improve the structure of generators G and F in a targeted way to address the information richness asymmetry that characterizes the Chinese artwork style migration task. Generator F is designed with dense blocks as its core transformation module, which has a stronger feature characterization capability than generator G, whose core transformation module is built from residual blocks; this effectively matches the transformation difficulty from the artwork image domain with low information complexity to the real natural image domain with high information complexity.

The generator in CycleGAN consists of an encoder and a decoder, and the middle contains a residual block-based conversion module that determines the characterization ability of the whole generator. Considering that the dense connection mechanism used by dense blocks has the advantage of enhancing feature propagation and encouraging feature reuse, replacing the residual blocks in the conversion module with dense blocks can effectively improve the characterization ability of the generator.
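
The difference between the two kinds of conversion module can be sketched as follows; the channel widths, growth rate and number of layers are illustrative assumptions, not the paper's exact configuration.

```python
# Residual block (generator G) versus dense block (generator F): dense connectivity
# concatenates all earlier feature maps, strengthening feature propagation and reuse.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, 1, 1), nn.InstanceNorm2d(ch), nn.ReLU(),
                                  nn.Conv2d(ch, ch, 3, 1, 1), nn.InstanceNorm2d(ch))
    def forward(self, x):
        return x + self.body(x)                        # identity shortcut

class DenseBlock(nn.Module):
    def __init__(self, ch, growth=32, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Conv2d(ch + i * growth, growth, 3, 1, 1),
                          nn.InstanceNorm2d(growth), nn.ReLU())
            for i in range(n_layers)])
        self.fuse = nn.Conv2d(ch + n_layers * growth, ch, 1)   # project back to ch channels

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))       # dense connections
        return self.fuse(torch.cat(feats, dim=1))
```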

Discriminator

The main role of the discriminator is to distinguish generated images from real ones. In this chapter, we use a 70×70 PatchGAN as the discriminator. It consists of multiple convolutional layers and outputs an n × n probability matrix, in which each value represents the probability that a patch of the original image is real; compared with the original discriminator, this helps the model pay more attention to image details. Finally, the average of the whole probability matrix is taken as the authenticity output.
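
A sketch of such a PatchGAN-style discriminator is shown below; the layer sizes follow the common 70×70 PatchGAN recipe and are assumptions, and the final averaging mirrors the description above.

```python
# PatchGAN-style discriminator: stacked convolutions produce an n x n grid of
# per-patch real/fake scores; the mean over the grid is the authenticity output.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch=3):
        super().__init__()
        self.model = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.InstanceNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, 2, 1), nn.InstanceNorm2d(256), nn.LeakyReLU(0.2),
            nn.Conv2d(256, 512, 4, 1, 1), nn.InstanceNorm2d(512), nn.LeakyReLU(0.2),
            nn.Conv2d(512, 1, 4, 1, 1))                 # per-patch score map

    def forward(self, x):
        patch_scores = self.model(x)                    # (B, 1, n, n) score grid
        return patch_scores, patch_scores.mean(dim=[1, 2, 3])   # grid + averaged output
```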

Saliency Edge Extraction Module

We designed a saliency edge extraction module consisting of a saliency detection part and an edge detection part. Saliency detection models the human visual and cognitive system and can detect salient regions of an image that differ from the rest. We use a pre-trained PFAN network to detect the region mask of the salient object in the image, which effectively represents the main painted region of the artwork. We use the well-established edge extraction network HED to extract image edges; it can extract edges of varying thickness, which helps simulate the different thicknesses of brush strokes in artworks. The saliency edge map obtained from the saliency edge extraction module is used to compute the saliency edge loss during training.
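
A minimal sketch of the module is given below, assuming `saliency_net` (e.g. a pre-trained PFAN) returns a soft saliency mask in [0, 1] and `edge_net` (e.g. a pre-trained HED) returns a soft edge map in [0, 1]; the loading of the pre-trained networks and the exact thresholding rule are assumptions.

```python
# Combine a saliency mask with an edge map to keep only the salient subject's edges.
import torch

def salient_edge_map(image, saliency_net, edge_net, mask_threshold=0.5):
    # Both networks are assumed frozen; gradients can still flow to `image` through edge_net.
    mask = (saliency_net(image) > mask_threshold).float()   # region mask of the salient object
    edges = edge_net(image)                                  # edges of varying stroke thickness
    return edges * mask                                      # saliency (subject) edge map
```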

Loss function

Based on the original losses of CycleGAN, we introduce a feature-level cyclic consistency loss and a saliency edge loss.

Adversarial loss

The adversarial loss is used to optimize the generator and the discriminator, improving the generative ability of the former and the discriminative ability of the latter. For style migration from the real natural image domain toward the artwork image domain, the adversarial loss constrains the model to generate output images that are closer to the artwork image domain. The detailed calculation of the adversarial loss is shown in equation (9):

$$loss_{GAN}(G, D_Y, X, Y) = \mathbb{E}_{y \sim Y}[\log D_Y(y)] + \mathbb{E}_{x \sim X}[\log(1 - D_Y(G(x)))]$$

where X and Y represent the data distributions of the real natural photo domain and the Dunhuang mural image domain, respectively. An analogous adversarial loss loss_GAN(F, D_X, X, Y) is applied to the style migration from the artwork image domain toward the real natural image domain, constraining the model-generated images to be closer to the real natural image domain.
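
The following evaluates equation (9) literally for a batch; the discriminator D_Y is assumed to output probabilities in (0, 1) (for instance the averaged PatchGAN score passed through a sigmoid), and in training G is updated to minimize this value while D_Y is updated to maximize it.

```python
# Batch estimate of Eq. (9): E_y[log D_Y(y)] + E_x[log(1 - D_Y(G(x)))].
import torch

def gan_value(G, D_Y, x_batch, y_batch, eps=1e-6):
    d_real = D_Y(y_batch)            # D_Y(y): probability that y is a real artwork image
    d_fake = D_Y(G(x_batch))         # D_Y(G(x)): probability assigned to the translated photo
    return torch.mean(torch.log(d_real + eps)) + torch.mean(torch.log(1 - d_fake + eps))
```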

Identity Loss

The identity loss helps the model avoid meaningless transformations and constrains it to preserve the color distribution between the input and output images. The detailed calculation is shown in Equation (10):

$$loss_{Identity}(G, F, D_X, D_Y) = \mathbb{E}_{x \sim X}\big[\|F(x) - x\|_1\big] + \mathbb{E}_{y \sim Y}\big[\|G(y) - y\|_1\big]$$

Cyclic consistency loss based on feature level

We adopt a feature-level cyclic consistency loss, i.e., we replace the original pixel-level L1 loss with an L1 loss on high-level features from a pre-trained VGG network. Compared with the pixel-level loss, the feature-level loss is a more robust measure of image similarity. The detailed calculation is shown in equation (11):

$$loss_{Cycle}(G, F, X, Y) = \mathbb{E}_{x \sim X}\big[\|VGG_{conv4\_4}(F(G(x))) - VGG_{conv4\_4}(x)\|_1\big] + \mathbb{E}_{y \sim Y}\big[\|VGG_{conv4\_4}(G(F(y))) - VGG_{conv4\_4}(y)\|_1\big]$$

where VGG_conv4_4(·) denotes the output feature of the conv4_4 layer of the pre-trained VGG network, which effectively characterizes the content structure information of an image.
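
A hedged sketch of equation (11) follows, using torchvision's pre-trained VGG19 truncated at an intermediate layer as a stand-in for the conv4_4 feature extractor; the exact slicing index and the choice of VGG19 are assumptions, and ImageNet input normalization is omitted for brevity.

```python
# Feature-level cycle consistency loss using a frozen VGG feature extractor.
import torch
import torch.nn as nn
from torchvision import models

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:26].eval()
for p in vgg.parameters():
    p.requires_grad_(False)          # frozen feature extractor

l1 = nn.L1Loss()

def cycle_loss(G, F, x, y):
    # x -> G(x) -> F(G(x)) should match x in VGG feature space (and symmetrically for y)
    return l1(vgg(F(G(x))), vgg(x)) + l1(vgg(G(F(y))), vgg(y))
```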

Saliency edge loss

We propose a saliency edge loss to simulate the stylistic feature of prominent subject strokes in artworks. First, the real natural image and the generated artwork image are fed into the saliency edge extraction module; a balanced cross-entropy loss is then computed between the resulting saliency subject edge maps, giving the saliency edge loss. The detailed formula is shown in Equation (12):

$$loss_{SalientEdge}(G, X) = \mathbb{E}_{x \sim X}\Big[-\frac{1}{N}\sum_{n=1}^{N}\big(\alpha E(x)_n \log(E(G(x))_n) + (1-\alpha)(1 - E(x)_n)\log(1 - E(G(x))_n)\big)\Big]$$

where N is the total number of pixels in the saliency subject edge map corresponding to the input image, E(·) is the HED edge extraction network, and α is a balancing weight factor.
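
A sketch of equation (12) is given below, reusing the `salient_edge_map` helper sketched earlier; treating the extracted edge map of the real photo as soft labels, clamping the logarithms with a small epsilon, and the value of the balancing factor α are all implementation assumptions.

```python
# Balanced cross-entropy between the salient edge maps of the real photo and the generated artwork.
import torch

def salient_edge_loss(G, x, saliency_net, edge_net, alpha=0.7, eps=1e-6):
    target = salient_edge_map(x, saliency_net, edge_net).detach()   # E(x), used as the label
    pred = salient_edge_map(G(x), saliency_net, edge_net)           # E(G(x))
    loss = -(alpha * target * torch.log(pred + eps)
             + (1 - alpha) * (1 - target) * torch.log(1 - pred + eps))
    return loss.mean()     # 1/N sum over pixels, then expectation over the batch
```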

Total target loss

Finally, the total objective loss of the model is calculated in equation (13):

$$\mathcal{L}(G, F, D_X, D_Y, X, Y) = \lambda_1 loss_{GAN}(G, D_Y, X, Y) + \lambda_2 loss_{GAN}(F, D_X, X, Y) + \lambda_3 loss_{Cycle}(G, F, X, Y) + \lambda_4 loss_{Identity}(G, F, X, Y) + \lambda_5 loss_{SalientEdge}(G, X)$$

We aim to optimize the following objective function, as detailed in Eq. (14):

$$G^*, F^* = \arg\min_{G,F} \max_{D_X, D_Y} \mathcal{L}(G, F, D_X, D_Y, X, Y)$$
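
Putting the pieces together, a hedged sketch of the total objective in equations (13)-(14) is shown below; `cycle_loss` and `salient_edge_loss` are the sketches given earlier, while `adversarial_loss` and `identity_loss` are assumed helper functions implementing equations (9) and (10), and the λ weights are placeholders rather than the paper's tuned values.

```python
# Weighted combination of the individual terms in Eq. (13). In practice the generators
# are updated to minimize this objective while the discriminators maximize their
# adversarial terms, as expressed by Eq. (14).
def total_loss(G, F, D_X, D_Y, x, y, saliency_net, edge_net,
               lambdas=(1.0, 1.0, 10.0, 5.0, 1.0)):
    l1, l2, l3, l4, l5 = lambdas
    return (l1 * adversarial_loss(G, D_Y, x, y)            # photo -> artwork direction, Eq. (9)
            + l2 * adversarial_loss(F, D_X, y, x)          # artwork -> photo direction
            + l3 * cycle_loss(G, F, x, y)                  # feature-level cycle consistency, Eq. (11)
            + l4 * identity_loss(G, F, x, y)               # identity / color preservation, Eq. (10)
            + l5 * salient_edge_loss(G, x, saliency_net, edge_net))   # saliency edge loss, Eq. (12)
```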

Experiments on Artistic Style Migration of Digitized Artworks

In the previous section, this paper constructs an art style migration model based on CycleGAN network, and introduces cyclic consistency loss and saliency edge loss on the basis of the original loss of CycleGAN. In this chapter, we will carry out art style migration experiments for digital art works to explore the impact of the introduction of cyclic consistency loss and saliency edge loss on image style migration, and further analyze the art style migration effect of this paper’s model in digital art works.

The experiments were run on Ubuntu 16.04 with an Intel(R) Core(TM) i7-7800X CPU @ 3.50GHz, an NVIDIA GeForce GTX 1080Ti GPU, Python 3.6.9, and 64 GB of server memory.

The experimental datasets in this chapter are horse2zebra, summer2winter, and monet2photo, provided by CycleGAN. The facades dataset is also used as a unified dataset for comparing multiple algorithms. All dataset images are 256×256 in size.

Loss function analysis

The art style migration model proposed in this paper introduces a cyclic consistency loss and a saliency edge loss. To analyze the improved CycleGAN loss, the μ value is set to 0, 1, 5, 10, 20, and 50, where μ represents the proportion of the cyclic consistency loss and the saliency edge loss in the overall loss function. As μ increases, the change in the cyclic consistency loss is shown in Figure 1. Its overall trend is first a rapid decrease, after which it levels off. This behavior indicates that, as μ increases, an image converted from the source domain to the target domain and back to the source domain is restored better: the restored image shows smaller changes in color and features, and relatively little texture information is lost.

Figure 1. Change of the cycle consistency loss

As μ increases, the trend of the saliency edge loss is shown in Figure 2. The saliency edge loss becomes smaller with larger μ, decreasing rapidly at first and then leveling off. This helps the generated image learn more features of the target domain while preserving as much as possible the domain-irrelevant features of the source domain.

Figure 2. Change of the saliency edge loss

The image classification accuracies obtained after adding the losses are shown in Table 1. The results show that adding the losses and tuning the loss-proportion hyperparameter μ appropriately can improve the realism of the generated images: the classification accuracy reaches at most 0.91 (μ = 20) on monet2photo, 0.87 (μ = 5) on horse2zebra, and 0.87 (μ = 10) on summer2winter. However, the results also show that the accuracy can drop to as low as 0.78 (μ = 50), 0.72 (μ = 20), and 0.73 (μ = 20) on the three datasets, respectively. Clearly, an inappropriate choice of μ degrades the conversion of some images and lowers the classification accuracy. The loss-proportion hyperparameter should therefore be selected according to the dataset and the practical application requirements.

Table 1. The classification accuracy of images

Data set      Monet2photo   Horse2zebra   Summer2winter
Real image    0.93          0.91          0.88
μ = 0         0.85          0.81          0.80
μ = 1         0.85          0.74          0.82
μ = 5         0.90          0.87          0.90
μ = 10        0.89          0.79          0.87
μ = 20        0.91          0.72          0.73
μ = 50        0.78          0.85          0.80

Generated image effect analysis
Clarity assessment

The sharpness measurements of the generated images are shown in Figure 3. Except for the summer2winter dataset with μ values of 1 and 5, where the image sharpness decreases, the sharpness of all other generated images is higher than that of CycleGAN without the added cyclic consistency loss and saliency edge loss. The experimental results show that introducing the cyclic consistency loss and the saliency edge loss helps improve the clarity of the generated images. In addition, the loss-proportion hyperparameter μ should be chosen neither too small nor too large.

Figure 3. Clarity of the generated images

Quality assessment of generated images

For the comparison experiments of different algorithms, the facades dataset is selected as the unified dataset in this section, and the algorithms chosen for comparison are CycleGAN, DualGAN and Pix2pix. The quality assessment results of the images generated by the different algorithms are shown in Table 2. Because CycleGAN is designed to work on unpaired images, the generation quality of the paired Pix2pix algorithm is noticeably better than that of the traditional CycleGAN. The algorithm of this paper, however, adds the cyclic consistency loss and the saliency edge loss on top of CycleGAN, and the quality of its generated images improves significantly, with SSIM and COSIN of 0.442 and 0.97, respectively, both higher than Pix2pix.

Table 2. Quality assessment of images generated by different algorithms

Algorithm                   SSIM    COSIN
CycleGAN                    0.212   0.834
Algorithm of this article   0.442   0.97
DualGAN                     0.224   0.937
Pix2pix                     0.422   0.96

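For reference, the two reported metrics can be computed as in the sketch below, which assumes the generated and reference images are uint8 NumPy arrays of identical shape; SSIM is taken from scikit-image, and COSIN is read here as the cosine similarity between the flattened pixel vectors, which is an assumption about the metric's definition.

```python
# Hedged sketch of the SSIM and COSIN quality metrics reported in Table 2.
import numpy as np
from skimage.metrics import structural_similarity

def evaluate_pair(generated, reference):
    ssim = structural_similarity(generated, reference, channel_axis=-1)
    a = generated.astype(np.float64).ravel()
    b = reference.astype(np.float64).ravel()
    cosin = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return ssim, cosin
```
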
Eye Movement Experiments and Analysis on Reconstructed Digitized Artworks

This chapter adopts the CycleGAN-based art style migration model proposed in this paper for the reconstruction of digitized artworks. The artworks selected for digital reconstruction fall into three categories, namely painting artworks, sculpture artworks and craft artworks, as shown in Table 3.

Table 3. Artworks

Categories of artworks    Name
Painting works of art     The Last Supper
                          The Night Watch
                          The Virgin of the Rocks
Sculpture art             David
                          The Thinker
                          Venus de Milo
Craft artwork             Terracotta Warriors
                          Tang tricolor
                          Blue and white porcelain

This chapter uses eye-tracking technology to explore the practical utility of this paper's art style migration model in digital artwork reconstruction, through the subjects' attention and cognition while viewing the reconstructed digital artworks.

Experimental setup

Subjects

Based on existing research on eye movement experiments, college students have a certain ability of aesthetic judgment, so the experimental data obtained by choosing them as subjects are representative and credible. In this experiment, 50 college students were randomly selected as subjects.

To ensure the validity of the experiment, participants were required to have normal or corrected-to-normal visual acuity and were not allowed to wear colored contact lenses during the experiment, as this would affect the results.

Experimental comparison

The comparison object selected for this eye movement experiment is the digitized artwork obtained after art migration using the traditional Pix2pix algorithm, and the content of the reconstructed digitized artwork is kept uniform.

Experimental Tools

A Tobii T60 eye-tracker, which combines an infrared camera with an integrated screen, was used to detect and record the participants' eye movements and visual pauses while viewing the images, and the Tobii Studio analysis software was used to drive the eye-tracker, design the experiment, and analyze the results to produce the data.

Eye Movement Data Metrics

Gaze duration is the sum of all fixation durations of the 50 tested students on a reconstructed digitized artwork.

The number of fixations is the total count of fixations of the 50 tested students on a reconstructed digitized artwork.
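
As an illustration of how these two metrics could be aggregated from exported fixation records (one row per fixation), a small pandas sketch follows; the file name and column names are hypothetical, not Tobii Studio's actual export schema.

```python
# Aggregate total gaze duration and fixation count per artwork from fixation records.
import pandas as pd

fixations = pd.read_csv("fixations.csv")            # hypothetical export file
per_artwork = fixations.groupby("artwork").agg(
    gaze_duration=("duration_s", "sum"),            # sum of all fixation durations
    fixation_count=("duration_s", "size"))          # total number of fixations
print(per_artwork)
```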

Analysis of eye movement data

The subjects were invited to view the digitized artworks reconstructed by the art style migration model of this paper and by the traditional Pix2pix method, and the collected eye movement data are shown in Table 4. When viewing the digitized artworks reconstructed using the method of this paper, the subjects' average gaze duration reached 14.89 s and the average number of fixations was about 64. In contrast, the digitized artworks reconstructed with the traditional Pix2pix method obtained an average gaze duration of 9.78 s and an average of 20 fixations. Among the different types of digitized artworks, the average gaze durations for the digital painting, sculpture and craft artworks reconstructed by this paper's method were 12.96 s, 16.44 s and 15.27 s, respectively, with average fixation counts of about 60, 74 and 57, all higher than those of the traditional Pix2pix method. In terms of both average gaze duration and number of fixations, the eye movement data of this paper's method significantly outperform the traditional Pix2pix method across the different types of digitized artworks.

Table 4. Eye tracking data

Method                   Categories of artworks   Name                       Fixation duration (s)   Number of fixations
Method of this article   Painting works of art    The Last Supper            14.85                   62
                                                  The Night Watch            9.49                    44
                                                  The Virgin of the Rocks    14.54                   75
                         Sculpture art            David                      18.83                   92
                                                  The Thinker                16.26                   64
                                                  Venus de Milo              14.22                   67
                         Craft artwork            Terracotta Warriors        14.31                   67
                                                  Tang tricolor              12.73                   71
                                                  Blue and white porcelain   18.77                   32
                         Average                                             14.89                   64
Pix2pix                  Painting works of art    The Last Supper            11.04                   20
                                                  The Night Watch            13.23                   23
                                                  The Virgin of the Rocks    13.30                   37
                         Sculpture art            David                      8.69                    5
                                                  The Thinker                6.76                    8
                                                  Venus de Milo              10.15                   34
                         Craft artwork            Terracotta Warriors        9.44                    29
                                                  Tang tricolor              8.24                    5
                                                  Blue and white porcelain   7.17                    18
                         Average                                             9.78                    20

Research on the Cultural Transmission Value of Digital Artwork Reconstruction

In the preceding chapters, this paper proposed an art style migration model based on the CycleGAN network, which realizes the style migration of artworks and promotes the reconstruction of digitized artworks. As the reconstruction technology of digitized artworks is an innovative means of promoting traditional culture and cultural inheritance, this chapter further explores, from several aspects, the value of digitized artwork reconstruction for cultural inheritance.

Developing aesthetic perspectives

The reconstruction of digital artworks not only keeps the artistic connotation unchanged, but also broadens the public's aesthetic vision and deepens the dissemination of cultural connotations. Relying on exhibition halls, 3R digital art can build embodied, immersive scenes based on digital artworks, allowing the audience to change from passive "static viewing" to active "wandering": instead of relying solely on their own aesthetic awareness, viewers move independently through a "real" three-dimensional exhibition space, engage emotionally in the appreciation process, and combine the multiple perceptions of vision, hearing, touch, taste and smell, walking through the virtual space and feeling the beauty of the artworks.

Enhancing Cultural Cultivation

Works of art have a strong aesthetic-education function and cultivate people's cultural temperament. Digital artworks are an art form suited to the cognitive habits of modern people: without deliberately withdrawing from a busy life, viewers can, through the fun of interaction, savor at any time the elegant mood, calm charm and other artistically infectious atmospheres, finding in the paintings a pure land and a moment of relaxation. The combination of the two can pass on and promote art and culture to a greater extent, and the cultivation of culture and artistic aesthetics can help the public find themselves in the world, adjust their mentality, and subtly change their perspective on society and the natural environment.

Enrichment of educational tools

Art theory and appreciation courses are inevitably dry and hard to understand if they rely only on teachers' verbal explanations. The combination of virtual and real scenes brought about by the reconstruction of digital artworks can make educational scenes more vivid, break through the limitations of flat teaching methods such as textbooks and slides, and hand the classroom over to students for independent exploration and communication. While activating the inner vitality of the artwork, it can also stimulate students' interest in traditional culture, let them enter the world of art, give full play to the idea of "learning by understanding", feel the spirit and creative intentions of the artist, reach a spiritual communication across space and time, and come to know artworks at a deeper level and understand the stories behind them, so as to better realize the goals of knowledge and emotion in teaching and guide students toward a correct aesthetic view.

Expanding Utility Value

The value of artworks is also reflected at the practical level: their unique traditional aesthetics enriches the thinking of art and design practitioners, and the characteristics of digital artworks allow the connotation of the artworks themselves to extend in directions such as painting and games. For example, in the field of painting, digital artwork reconstruction technology can be used to simulate brush techniques and hue changes, creating vivid and interesting painting effects and forming digital artworks that are rooted in the traditional painting heritage while matching the spirit and culture of the times, bringing a painting experience that blurs the boundaries of the medium and creating a more immersive space for painters.

Conclusion

Based on CycleGAN, a generative adversarial network style migration model, this paper constructs an art style migration model by improving CycleGAN's symmetric cyclic consistency structure and introducing a cyclic consistency loss and a saliency edge loss, among other improvements, providing methods and approaches for the digital reconstruction of artworks.

Art style migration experiments on digitized artworks explore the impact of introducing the cyclic consistency loss and the saliency edge loss on style migration in this paper's model. When μ, which represents the proportion of the cyclic consistency loss and the saliency edge loss in the overall loss function, increases, the saliency edge loss gradually becomes smaller, while the cyclic consistency loss first decreases rapidly and then levels off. By adjusting μ, the image classification accuracy on the monet2photo, horse2zebra and summer2winter datasets can be as high as 0.91 (μ = 20), 0.87 (μ = 5) and 0.87 (μ = 10), and as low as 0.78 (μ = 50), 0.72 (μ = 20) and 0.73 (μ = 20). The quality of the generated images improves significantly, with SSIM and COSIN of 0.442 and 0.97, respectively, higher than the Pix2pix baseline, and the clarity of the generated images also improves. Clearly, introducing the cyclic consistency loss and the saliency edge loss improves the overall performance of this paper's style migration model, but the loss-proportion hyperparameter μ must be chosen according to the dataset and the practical application requirements.

The art style migration model of this paper and the traditional Pix2pix network model were used to digitally reconstruct the same artworks, and eye movement experiments were designed to examine the reconstruction effect of the digitized artworks. The average gaze duration and average number of fixations obtained for the digitized artworks reconstructed with this paper's model are 14.89 s and about 64, respectively, higher than the 9.78 s and 20 fixations of the traditional Pix2pix method. In the digital reconstruction of the different categories of artworks (paintings, sculptures and crafts), the average gaze duration and average number of fixations for this paper's model likewise outperform the traditional Pix2pix method.