
Intelligent Recognition of Colour and Contour from Ancient Chinese Embroidery Images



Introduction

Ancient Chinese embroidery refers to the traditional skill of hand-embroidering patterns on wool, hemp, silk, cloth and other fabrics according to pre-designed colours and graphics. As one of the oldest handicrafts in China, this unique embroidery style, with its high artistic value and cultural merit, has taken shape throughout history [1] and is still in use. The majority of local traditional embroidery skills in China have been listed as national intangible cultural heritage, which needs digital protection and inheritance. Ancient embroidery is also an important source of oriental inspiration [2] and design elements for designers today.

Because of its rich colours and complex patterns, the needlework of traditional manual embroidery requires long, systematic study, and drawing the designs requires a strong foundation in fine brushwork. Therefore, today's designers can hardly master this complex handicraft in a short time, and design inspiration and elements are usually acquired from the objects and images of ancient Chinese embroidery heritage. By function, ancient Chinese embroidery falls into two kinds: one forms decorative patterns on the surface of daily wares such as clothing, shoes, hats and bags, and the other reproduces ancient Chinese calligraphy and painting works for ornamentation. Chinese ethnic dress has been the most well-known source of oriental culture in the fashion world [3]. Therefore, in this study, the embroidery patterns on the daily clothing of ancient China are taken as the research object for identifying colours and extracting contours.

The embroidery images in ancient Chinese daily costumes are exquisite and colourful [4]: some are monochrome and some involve 50 colours, all in harmony with the background. When choosing the colours and graphics of embroidery patterns, designers usually face three problems. First, embroidery patterns must be converted from physical objects to computer images, most commonly by instant digital imaging. With a digital camera, the very same colour block usually varies significantly in light and shadow because the fabric surface is uneven under illumination. Even in cloudy natural light, the embroidery threads that form the patterns protrude from the fabric surface and cause colour aberration [5]. Images scanned with portable scanners, although less affected by light, also contain uneven shadows. Two embroidery images captured by a digital camera and a Canon portable scanner, respectively, are shown in Figure 1.

Fig. 1

Colour recognised by graphic software for embroidery images obtained by a camera (Part 1) and scanner (Part 2)

After colour recognition with graphic software, the resulting colour parameters are all different, even though the area is embroidered with thread of a single colour, which increases the difficulty for designers in choosing and applying embroidery colours. Second, the embroidery contour is affected by the material and the needlework and is not smooth. Graphic software cannot directly select a smooth contour for design innovation; it must be drawn manually, otherwise details are lost. Third, in addition to the outer contour, the interior of an embroidery pattern is usually made up of multiple different shapes. The joins between graphics are complex: some are very close, some have a certain spacing, and others are linked with regular points and lines. Therefore, designers must spend much time manually extracting pattern and graphic contours, and need a strong command of graphic software.

In this study, 80 embroidery sample images were preliminarily screened from the traditional daily dress collections of the Folk Costume Museum of Jiangnan University and the Palace Museum; the embroidery samples displayed in these two museums are well preserved. When choosing embroidery patterns as design elements, designers usually prefer works with complete patterns and bright colours, so that colours and contours can be extracted more intuitively. Therefore, when choosing embroidery samples, we did not deliberately select damaged or faded ones.

As an example of embroidery complexity, a Qing Dynasty embroidery pattern from the Folk Costume Museum of Jiangnan University was selected (Figure 2, Original). The embroidery pattern consists of flowers, leaves and branches. The flower part comprises two front-facing flowers, a side flower and a bud, the last of which consists of a pedicel and calyx. There are 12 leaves of different sizes and three branches, and each leaf is divided into two halves by the central vein. Except for the stamens, each graphic is separated by a narrow, smooth gap; the stamens themselves are very complex in shape, being composed of radial line ends connected with dots, and the petal shapes are also connected through lines. The embroidery pattern is rich in colours: the petal colours include red, pink, yellow, white, light green and four different shades of blue, and each of the larger petals has two colours. When the colour is extracted manually, the same colour, such as pink, differs among positions, and it is difficult to determine which pink should be used as the representative colour (Figure 2, Part 1). A leaf consists of three different shades of green (Figure 2, Part 2). The branch, pedicel and calyx are the same greenish blue (Figure 2, Parts 1 and 3). When the contour is extracted manually, every point, line and surface contour must be selected in turn, which is repetitive and time-consuming. Therefore, the more complex the work, the more difficult manual extraction with graphic software becomes, and a suitable algorithm is urgently needed to identify the colours and extract the contours quickly and accurately.

Fig. 2

Flower Patterns Embroidered in Qing Dynasty

Recently, some researchers have used colour clustering algorithms to separate colours from fabric images. For instance, the FCM colour image clustering algorithm was used to reduce the number of colours and separate colour patterns [6]. The Gustafson-Kessel clustering algorithm was adopted in the CIE Lab colour system for colour separation from machine embroidery images [7]. K-means clustering can be used for colour extraction from polychromatic fabrics [8]. The RGB colour space was converted to CIE Lab, and improved K-means clustering was used to analyse interlaced multi-coloured dyed-yarn woven fabrics. A segmentation model based on fuzzy regions was used to segment colour structure images in the CIE Lab colour system [9]. The main colours of Chinese traditional clothing were identified based on a mean-shift clustering algorithm [10]. Based on the mean-shift algorithm and colour measurement, a consistent and reliable colour measurement of multiple colour patterns was established [11]. Based on the adaptive K-means algorithm, a new Python script was developed to automatically separate the colours in plant leaves [12].

The Canny edge detection algorithm is one of the most widely used algorithms in image processing, as it performs well under different conditions [13]. Some scholars have combined colour clustering algorithms and the Canny algorithm to identify fabric image contours. K-means clustering and Canny operators were adopted to extract the closed edges of the base elements of a colour carpet pattern [14]. A multi-scale segmentation algorithm based on region segmentation was used to segment fabric printing patterns and obtain high-quality smooth patterns [15]. Printed fabric pattern contours were extracted by edge detection based on the Canny operator [16]. Image contours were identified by the Canny operator after de-noising batik fabric images and marking the complete elements [17]. The classical Canny edge detection algorithm has also been used to automatically select the threshold required to segment the gradient image, dealing with region boundaries made ambiguous by poor illumination and the resulting uncertainty in the gradient image [18].

However, these methods cannot be directly applied to the colour and contour extraction of ancient Chinese embroidery patterns. Therefore, this study proposes an edge detection method based on the K-means++ clustering algorithm and the Canny operator to identify the colours and contours of embroidery patterns. This method is a new attempt to recognise the colour and contour of ancient Chinese embroidery patterns. It addresses the inaccuracy, long processing time and difficulty of traditional manual recognition and extraction, and can provide important editable colour and graphic materials for design innovation. It also makes it possible to reproduce the embroidery patterns of ancient clothing on modern clothing and other textiles, which plays an extremely important role in the digital protection, inheritance, development and innovation of traditional embroidery.

Materials and Methods
Research framework

The framework of this study is shown in Figure 3. Step 1: A digital camera and a flatbed scanner are used to acquire embroidery images. Step 2: The embroidery images are preprocessed with colour level adjustment, Gaussian filtering, sharpening and colour smoothing, in turn. Step 3: The colours of the embroidery images are recognised using K-means++ clustering. Step 4: The Canny operator is used to extract the pattern contours of the embroidery images.

Fig. 3

Research framework of the proposed method

Since an embroidery pattern on a one-colour background is an important feature of ancient Chinese embroidery, our solution is mainly used to recognise the colour and contour of one-colour-background images.

Research methods
Acquisition of ancient Chinese embroidery images

Ancient Chinese embroidery works are held in different museums, and the illuminant of each museum is different. Therefore, in order to enable designers to extract colours and contours directly from images taken during a museum visit, we did not use a special lighting cabinet with a fixed illuminant. Only embroidery images taken in a non-dark environment can be recognised.

In this study, a digital camera [10] and a flatbed scanner [9] were used in combination to acquire ancient Chinese embroidery images. A Canon EOS REBEL T3i digital camera was used with the RGB colour space, photometric mode 5, P mode and ISO AUTO. Under the lighting environment of the exhibition hall and without the glass barrier of the display cabinet, local embroidery patterns were shot directly from a close distance. A Canon LiDE 300 flatbed scanner was used in photo scan mode at a scan resolution of 300 DPI (the device supports up to 2400 DPI).

The flatbed scanner can scan small ornaments within an A4 paper size range, and clothing larger than this was photographed with the Canon EOS REBEL T3i. Because the actual size of the photographed and scanned images was large, the cubic interpolation method based on a 4×4 pixel neighbourhood was used to compress the images and shorten the computation time.
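As a brief illustration, this compression step can be performed with OpenCV's bicubic interpolation, which uses a 4×4 pixel neighbourhood as described above. The following is a minimal sketch; the target width of 1024 px is an assumed working size, not a value specified in this study.

```python
import cv2

def compress_image(image, target_width=1024):
    """Downscale a large photograph or scan using bicubic (4x4 neighbourhood)
    interpolation. The target width of 1024 px is an assumed working size."""
    h, w = image.shape[:2]
    if w <= target_width:
        return image  # image is already small enough
    scale = target_width / w
    # cv2.resize expects the destination size as (width, height)
    return cv2.resize(image, (target_width, int(h * scale)),
                      interpolation=cv2.INTER_CUBIC)

# usage: small = compress_image(cv2.imread("embroidery.jpg"))
```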

Finally, we selected 80 embroidery samples covering three themes: plants, animals and characters. Representative experimental results from these three themes appear in the illustrations of this study.

Image colour preprocessing

Step 1: Adjustment of colour levels.

In the experiment, since some objects are old and their pattern colours are dim, which would affect the discrimination of the pattern contour, a colour level adjustment step was added to the preprocessing. This is one of the innovations of the proposed method. Its effect is to spread out overly concentrated colour values and increase the differences between colours. The treatment improves visual appearance and strengthens the gradients computed during contour extraction.

Because the black field and white field in the histogram [19] differ among the 40 images used in the colour experiment, the black field does not start at 0 and the white field does not end at 255. Therefore, it is necessary to move the two fields of each image to the corresponding starting and ending positions. In the processing, the total number of pixels is counted according to the size of each image, and then the pixel counts are accumulated bin by bin in the direction of increasing grey value, starting from zero.

When the accumulated count exceeds the threshold, the corresponding grey value is taken as the new black point and mapped to a grey value of 0. The same method is applied to accumulate the white field, starting from the high end of the histogram and moving in the direction of decreasing grey value. The two relevant equations, 1 and 2, are:

\[
\mathrm{sum}_1 \mathrel{+}= \mathrm{hist}(i), \quad \text{s.t.}\ \mathrm{sum}_1 \ge \eta \times w \times h, \quad \mathrm{minlevel} = i \tag{1}
\]

\[
\mathrm{sum}_2 \mathrel{+}= \mathrm{hist}(256 - i), \quad \text{s.t.}\ \mathrm{sum}_2 \ge \eta \times w \times h, \quad \mathrm{maxlevel} = 256 - i \tag{2}
\]

where i ranges from 0 to 255; sum1 and sum2 are accumulated counts of pixel occurrences; hist(i) is the number of pixels with grey value i; η is the set threshold (0.01); w and h are the width and height of the image, respectively; and minlevel and maxlevel are the grey values of the new black and white fields of the original image, respectively. Finally, image A is obtained, as shown in Figure 4. In our experiments, each image to be processed is called the original image, which is called image A after the adjustment of colour levels, as shown in the figure below; image A is simply a label for the three-dimensional matrix used in processing.
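A minimal sketch of this colour level adjustment, assuming an OpenCV/NumPy implementation, is given below. The cumulative-histogram search follows Equations 1 and 2 with η = 0.01; the linear stretch that remaps minlevel to 0 and maxlevel to 255 is an assumed interpretation of how the two fields are "moved", since the exact remapping formula is not spelled out above.

```python
import cv2
import numpy as np

def adjust_levels(image, eta=0.01):
    """Colour level adjustment following Equations (1)-(2).
    The final linear stretch is an assumed remapping of the located fields."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    h, w = gray.shape
    threshold = eta * w * h

    # Equation (1): accumulate from the dark end until the threshold is reached
    total, minlevel = 0.0, 0
    for i in range(256):
        total += hist[i]
        if total >= threshold:
            minlevel = i
            break

    # Equation (2): accumulate from the bright end (using 255 - i to stay
    # within the valid 0-255 index range) until the threshold is reached
    total, maxlevel = 0.0, 255
    for i in range(256):
        total += hist[255 - i]
        if total >= threshold:
            maxlevel = 255 - i
            break

    # Stretch each channel so that minlevel maps to 0 and maxlevel maps to 255
    stretched = (image.astype(np.float32) - minlevel) * 255.0 / max(maxlevel - minlevel, 1)
    return np.clip(stretched, 0, 255).astype(np.uint8)
```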

Fig. 4

Comparison of images and histograms before and after colour level adjustment

Step 2: Using Gaussian filtering to reduce noise.

This step traverses each pixel point (x, y) in the image with a 3×3 Gaussian mask with integer coefficients. Equation 3 is as follows:

\[
G(x, y) = \frac{1}{2\pi\sigma^{2}}\, e^{-\frac{x^{2} + y^{2}}{2\sigma^{2}}} \tag{3}
\]

where σ is the Gaussian standard deviation, which has a certain influence on the filtering effect, and (x, y) are the coordinates relative to the centre point of the Gaussian mask. During traversal, the noise in image A from Step 1 is filtered out to form a smoother image A' (Figure 5b). Equation 4 is as follows:

\[
A' = \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix} * A \tag{4}
\]

Fig. 5

Image contrast before and after Gaussian filtering, sharpening and smoothing

The same form of filter is used regardless of the image, because the other processing steps adapt to different images.
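For reference, Equation 4 can be applied directly with a 2D convolution; the sketch below assumes OpenCV, and the division by 16 (the sum of the mask coefficients) is an assumed normalisation that keeps overall brightness unchanged, since only the integer coefficients are listed above.

```python
import cv2
import numpy as np

# 3x3 integer Gaussian mask from Equation (4), normalised by its sum (16)
GAUSSIAN_KERNEL = np.array([[1, 2, 1],
                            [2, 4, 2],
                            [1, 2, 1]], dtype=np.float32) / 16.0

def gaussian_filter(image_a):
    """Convolve image A with the 3x3 Gaussian mask to obtain the smoother image A'."""
    return cv2.filter2D(image_a, -1, GAUSSIAN_KERNEL)

# cv2.GaussianBlur(image_a, (3, 3), 0) produces the same separable 1-2-1 kernel.
```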

Step 3: Image sharpening.

Because of the characteristics of embroidery and the influence of photography, transitional pixels appear between the pattern and the background, forming a large number of gradient colours and causing colour recognition to extract colours that are not actually present. In the experiment, yellow and light red on a black background are the most obvious cases. Because the pattern edges are long, the number of gradient transition pixels is also large, and colours that do not exist in the image eventually appear during colour recognition. Therefore, a sharpening mask is used to minimise the transitional pixels as much as possible, compensate for the image contour, enhance the image edges and make them clearer.

Image A' obtained from the previous step is sharpened to form a new image B (Figure 5c). Equation 5 is as follows:

\[
B = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 5 & -1 \\ 0 & -1 & 0 \end{bmatrix} * A' \tag{5}
\]
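A corresponding sketch of the sharpening step, assuming the same OpenCV-based setup, applies the mask of Equation 5: the centre weight of 5 boosts each pixel while the four −1 neighbours subtract the local average, which enhances edges.

```python
import cv2
import numpy as np

# Sharpening mask from Equation (5)
SHARPEN_KERNEL = np.array([[ 0, -1,  0],
                           [-1,  5, -1],
                           [ 0, -1,  0]], dtype=np.float32)

def sharpen(image_a_prime):
    """Sharpen the Gaussian-filtered image A' to obtain image B."""
    return cv2.filter2D(image_a_prime, -1, SHARPEN_KERNEL)
```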

Step 4: Colour smoothing.

For knitted embroidery patterns, when only Gaussian filtering is used to remove noise, the staggered texture of the embroidery threads remains visible inside the pattern. In fact, these threads belong to a single whole, and the residual texture is detrimental to both colour extraction and contour extraction. Hence, it is also necessary to neutralise similar colours by smoothing them, thus reducing the extraction error.

A 5-dimensional iteration space is created from the physical coordinates (x, y) and the three-dimensional vector of a colour space (e.g. the three channel values of the RGB or Lab space). At each iteration, the vectors that satisfy the constraints within this 5-dimensional space are combined into an offset vector, which continually updates the coordinates of the point until the coordinate change of the point to be updated becomes negligible. The final results are shown in Figure 5d. The points satisfying the following conditions in this 5-dimensional space form the basis of the update vector. Equation 6 is as follows:

\[
X - f \le x \le X + f, \qquad Y - f \le y \le Y + f, \qquad \left\| (R, G, B) - (r, g, b) \right\| \le c \tag{6}
\]

where f and c are the set ranges of the physical space and the colour space, respectively; x, y and r, g, b are the physical coordinates and colour vector of the point to be updated, respectively; X, Y are the physical coordinates of surrounding points that satisfy the constraints; and R, G, B form the colour vector of those surrounding points.
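The iterative update described above corresponds closely to mean-shift filtering in a joint spatial-colour space. A minimal sketch using OpenCV's pyrMeanShiftFiltering is given below, where the spatial window radius plays the role of f and the colour window radius plays the role of c; the values 10 and 25 are illustrative assumptions, not parameters reported in this study.

```python
import cv2

def smooth_colours(image_b, f=10, c=25):
    """Colour smoothing in the joint (x, y, R, G, B) space, in the spirit of
    Equation (6), implemented here with OpenCV's mean-shift filtering.
    f is the spatial window radius and c the colour window radius; the
    default values are illustrative assumptions."""
    # pyrMeanShiftFiltering expects an 8-bit, 3-channel image
    return cv2.pyrMeanShiftFiltering(image_b, f, c)
```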

K-means++ clustering

Objects can be clustered by their colour [20]. K-means is one of the classical clustering algorithms [21] and has been applied to separate image backgrounds and edges [22]. Compared with K-means, K-means++ requires only one randomly selected sample point as the initial cluster centre C1 [23]. First, the shortest distance D(x) between each sample and the existing cluster centres is calculated. Then the probability that each sample point is selected as the next cluster centre is calculated with Equation 7:

\[
\frac{D(x)^{2}}{\sum_{i=1}^{n} D(x_{i})^{2}} \tag{7}
\]

Finally, the next cluster centre is selected by the roulette method. Sample points farther from the existing cluster centres are more likely to be selected, which better reflects the differences between classes. The roulette selection is repeated until K cluster centres have been chosen.

The subsequent operation follows the same iterative principle as K-means. Assuming that the data are divided into clusters (C1, C2, …, Ck), the goal is to minimise the squared error E (Equation 8):

\[
E = \sum_{i=1}^{k} \sum_{x \in C_i} \left\| x - \mu_i \right\|_2^2 \tag{8}
\]

where x is the 3D vector of RGB values in the image and μi is the mean vector of cluster Ci, computed as follows (Equation 9):

\[
\mu_i = \frac{1}{\left| C_i \right|} \sum_{x \in C_i} x \tag{9}
\]

K-means++ clustering fits the distribution of the sample data better than K-means: both intraclass similarity and interclass differentiation are higher. The K value is set according to the number of colours in the selected image. If an image contains 12 colours with an obvious colour difference, we set K = 12. After the operation, the 12 colours are identified and arranged from more to fewer pixels of each colour (Figure 6).
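One readily available implementation of this clustering step is scikit-learn's KMeans with k-means++ initialisation; the study does not name a specific library, so the sketch below is an assumed realisation of Equations 7-9 that also sorts the identified colours from the most to the least frequent, as in Figure 6.

```python
import numpy as np
from sklearn.cluster import KMeans

def recognise_colours(image, k):
    """Identify the k dominant colours of a preprocessed embroidery image using
    K-means++ initialisation, and sort them by decreasing pixel count."""
    pixels = image.reshape(-1, 3).astype(np.float64)   # each pixel as a 3D colour vector
    model = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0).fit(pixels)
    centres = model.cluster_centers_.astype(np.uint8)  # cluster mean vectors (Equation 9)
    counts = np.bincount(model.labels_, minlength=k)   # pixels assigned to each colour
    order = np.argsort(counts)[::-1]                   # arrange colours from more to fewer pixels
    return centres[order], counts[order]

# usage: palette, pixel_counts = recognise_colours(preprocessed_image, k=12)
```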

Fig. 6

Colour identified by K-means++ clustering algorithm

Canny operator extraction

Canny first proposed the edge detection method based on what is now called the Canny operator [24]. Experiments in recent years show that the Canny operator gives the best segmentation effect for printed fabric patterns [25]. Hence, the Canny operator is chosen to detect the edge profile of the embroidery pattern. The methodology consists of three steps.

Step 1: Two 5×5 Sobel operators are used to traverse the image along the X-axis and Y-axis, respectively, and the preliminary contour is extracted. Equation 10 is as follows:

\[
G_x = \begin{bmatrix} -1 & -2 & 0 & 2 & 1 \\ -4 & -8 & 0 & 8 & 4 \\ -6 & -12 & 0 & 12 & 6 \\ -4 & -8 & 0 & 8 & 4 \\ -1 & -2 & 0 & 2 & 1 \end{bmatrix} * A
\qquad \text{and} \qquad
G_y = \begin{bmatrix} 1 & 4 & 6 & 4 & 1 \\ 2 & 8 & 12 & 8 & 2 \\ 0 & 0 & 0 & 0 & 0 \\ -2 & -8 & -12 & -8 & -2 \\ -1 & -4 & -6 & -4 & -1 \end{bmatrix} * A \tag{10}
\]

where Gx and Gy are the gradient images along the X-axis and Y-axis, respectively, and A is the image processed by the Gaussian filter.

Step 2: The amplitude matrix G (Equation 11) and the angle matrix θ of the image (Equation 12) are calculated to suppress non-maximum values:

\[
G = \sqrt{G_x^{2} + G_y^{2}} \tag{11}
\]

\[
\theta = \operatorname{atan2}\left( G_y, G_x \right) \tag{12}
\]

Within a 3×3 neighbourhood, each pixel is compared with the two adjacent pixels along its gradient direction, and non-maximum values are eliminated (Figure 7).

Step 3: With a double-threshold algorithm, a high threshold (1200) and a low threshold (200) are set to make the contour more accurate during the experiment. Gradient amplitudes higher than 1200 belong to the contour, and gradient amplitudes lower than 200 are excluded. Amplitudes between the two thresholds are judged by connectivity: pixels connected to a confirmed contour belong to the contour, and those not connected are excluded. The final results are shown in Figure 8.
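A minimal sketch of this contour extraction step, assuming OpenCV, is given below. The aperture size of 5 matches the 5×5 Sobel operators of Equation 10, L2gradient=True matches the Euclidean amplitude of Equation 11, and the double thresholds are the values (200, 1200) selected in the experiments; the conversion to greyscale before edge detection is an assumption.

```python
import cv2

def extract_contours(image_a):
    """Canny contour extraction with a 5x5 Sobel aperture and the double
    thresholds (low = 200, high = 1200) chosen in the experiments."""
    gray = cv2.cvtColor(image_a, cv2.COLOR_BGR2GRAY)   # assumed greyscale conversion
    edges = cv2.Canny(gray, 200, 1200, apertureSize=5, L2gradient=True)
    return edges  # binary edge map: non-zero pixels belong to the extracted contour
```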

Fig. 7

Comparison of the size of the corresponding domain values within the 3×3 domain

Fig. 8

Contour extracted by Canny operator

Experimental Results and Discussion
Experimental results

To further verify the applicability of the method, we selected a piece of traditional Qing Dynasty women's clothing and took four adjacent embroidery patterns for processing (Figure 9).

Fig. 9

Four embroidery patterns selected from women's traditional clothing of the Qing Dynasty

Colour extraction is performed in the steps of colour level adjustment, Gaussian filtering, sharpening, colour smoothing and K-means++ clustering, while contour extraction is performed in the steps of colour level adjustment, Gaussian filtering and the Canny operation. The colour extraction results are shown in Figure 10b, and the set value of the colour quantity K is indicated in the diagram.
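For reference, the two processing paths can be composed end to end as in the hedged sketch below. The percentile-based level stretch approximates the cumulative-histogram rule of Equations 1-2 with η = 0.01, and the mean-shift parameters (10, 25) are illustrative assumptions rather than settings reported in this study.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def extract_colours_and_contours(path, k):
    """End-to-end sketch of the colour path (levels, Gaussian filter, sharpening,
    colour smoothing, K-means++) and the contour path (levels, Gaussian filter, Canny)."""
    image = cv2.imread(path)

    # Shared preprocessing: colour level adjustment and Gaussian filtering
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    lo, hi = np.percentile(gray, [1, 99])               # ~1% black/white fields (eta = 0.01)
    levelled = np.clip((image - lo) * 255.0 / max(hi - lo, 1), 0, 255).astype(np.uint8)
    blurred = cv2.GaussianBlur(levelled, (3, 3), 0)

    # Colour path: sharpening, colour smoothing, K-means++ clustering
    sharpen_kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    sharpened = cv2.filter2D(blurred, -1, sharpen_kernel)
    smoothed = cv2.pyrMeanShiftFiltering(sharpened, 10, 25)
    pixels = smoothed.reshape(-1, 3).astype(np.float64)
    model = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0).fit(pixels)
    palette = model.cluster_centers_.astype(np.uint8)

    # Contour path: Canny operator on the levelled, Gaussian-filtered image
    edges = cv2.Canny(cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY), 200, 1200,
                      apertureSize=5, L2gradient=True)
    return palette, edges
```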

Fig. 10

Colour recognition and edge detection of restored samples

Fig. 11

Duplication of embroidery patterns of the Qing Dynasty

Colour extraction by this method avoids the obstacles to manual colour acquisition illustrated in Figure 1; hence, the embroidery pattern colours extracted based on K-means++ clustering have a better effect. Meanwhile, the contour extracted by this method can be overlaid perfectly on the image.

Finally, the extracted contours are filled with the extracted colours, and a final image is produced by a computerised embroidery machine for comparison with the original image (Figure 11). The colours and contours extracted by this new method are very close to the original image, and further optimisation can even be carried out on this basis (Figure 15).

Discussion
Colour level adjustment and sharpening

Forty images were randomly selected from the 80 images for the first colour identification experiment. First, Gaussian filtering and colour smoothing were used for preprocessing, followed by K-means++ clustering, but this could not correctly identify all colours. Second, the preprocessing steps were improved: colour level adjustment was added before Gaussian filtering, and K-means++ clustering was then carried out. The result of colour identification is shown in Figure 12b.

Fig. 12

Comparison of image colour recognition before and after adding colour levels and sharpening

When the grey values in the histogram are relatively concentrated, the colours recognised by K-means++ clustering have low saturation and give designers only a weak sense of colour when used as design material. After the black field and white field are moved to adjust the colour levels, the colour saturation improves, more vivid colours are identified and the results can be better used in design.

Sharpening was added after Gaussian filtering in the third experiment, followed by K-means++ clustering (Figure 12c). After adding sharpening, the recognised colours are closer to the original image in both hue and quantity, especially for images with a large difference between the target and background colours.

The same 40 images were used in the contour extraction experiment. In the first experiment, the Canny operator was used to extract contours directly after Gaussian filtering (Figure 13). In the second experiment, colour level adjustment was added before Gaussian filtering (Figure 13c). The results show that the edge lines detected in the second experiment are more coherent and contain fewer meaningless points and lines, which is more consistent with the contour edges of the original image (Figure 13d). When the grey values in the histogram are relatively concentrated, the change between adjacent pixels is not obvious, which affects the recognition of the image contour. After the black field and white field are moved to adjust the colour levels, the original grey values are stretched, the concentrated grey values are scattered and the grey value difference between adjacent pixels is increased, which makes contour identification easier.

Fig. 13

Image contour extraction and comparison before and after adjustment of colour levels

Because the images were acquired from ancient Chinese embroidery, some of the physical colours are dim due to age or the influence of environmental illumination, which slightly affects hue and saturation and is thus not conducive to colour and contour extraction. After the adjustments above, the colour and edges of the image are optimised, which is more conducive to colour and contour extraction.

Colour space selection

Colour extraction experiments on multiple embroidery images in the RGB, HSV and Lab colour spaces were conducted using K-means++ clustering. The results show that the RGB space is more suitable for the colour extraction of ancient Chinese embroidery images (Figure 14).

Fig. 14

Comparison of image colour recognition in different colour spaces

The HSV space is prone to obvious errors, while the Lab and RGB spaces lead to small errors and similar results. However, the Lab space requires two colour conversions, whereas the RGB space avoids colour conversion during batch image processing and, hence, returns results faster. We therefore chose the RGB space for the later experiments.

Parameter selection

The K-means++ clustering parameter K was set according to the number of colours in each image and thus differed among images. After many experiments with various combinations of high and low thresholds, we finally selected 1200 and 200, respectively, which are more suitable for the processing of embroidery images. Contour extraction with these thresholds avoids the extraction of redundant contours (Figure 15).

Fig. 15

Image contour extraction before and after setting threshold

Conclusion

In this study, an effective colour clustering and edge detection method is proposed to recognise colours and contours from ancient Chinese embroidery patterns. It innovatively adds colour level adjustment and sharpening to the image preprocessing, which makes it possible to effectively recognise the three-channel colour information in the RGB space of an embroidery image taken with a digital camera or smartphone or scanned with a portable scanner. After preprocessing, K-means++ clustering is used for colour clustering, and the contour is then extracted by the Canny operator.

The new method enables accurate and intelligent recognition of colours and contours from embroidery images and can be used for colour collocation and element application in a wider design field. On this basis, we will develop a mobile app and web tools for designers to extract the colours and contours of ancient Chinese embroidery patterns. These tools will address the difficulty of colour clustering and the slowness of contour extraction that designers face when extracting colours and contours manually with computer-aided graphic software, and will help designers extract colours and contours from ancient Chinese embroidery patterns more quickly and conveniently.

This study also discussed whether colour level adjustment and sharpening should be added to the preprocessing. The colours and contours extracted from embroidery images were optimised with a high threshold of 1200 and a low threshold of 200 for the Canny operator. The new method can therefore effectively extract the colours and contours of ancient Chinese embroidery patterns at a low computational cost, laying a foundation for the digital storage and innovative application of embroidery patterns.

In the future, this method will be applied further to extract the colours and contours of brocade patterns. We will also use deep learning to extract embroidery patterns that reflect embroidery stitches and build an embroidery stitch library, which will be used to identify stitches in high-resolution embroidery images. Apps will be developed to help people outside the textile industry better identify and understand embroidery stitches.