AHEAD OF PRINT
Journal Data
Format: Journal
eISSN: 2300-0929
First published: 19 Oct 2012
Publication frequency: 4 issues per year
Languages: English
License: Open Access

Animal fiber recognition based on feature fusion of the maximum inter-class variance

Published online: 05 Dec 2022
Volume & Issue: AHEAD OF PRINT
Introduction

Cashmere has become one of the most popular raw materials in the textile industry because of its comfortable hand feel and excellent thermal insulation [1]. However, the international cashmere market faces great challenges. Cashmere and wool fibers have a similar appearance and feel, and they are mainly identified by their scale characteristics [2]. Merchants sometimes hide lower-priced wool in cashmere products, which disrupts the stable operation of the textile market [3]. At present, cashmere and wool are mainly identified by human observation of microscopic images of the two fibers [4]. This is an extremely time-consuming and error-prone task, and its results are not reliable. Thus, to meet the market demand for cashmere products, it is necessary to develop an automatic classification system that can identify cashmere and wool fibers quickly and precisely.

In recent years, many researchers have applied image processing technology to the recognition of cashmere and wool. In ref. [5], cashmere and wool fibers were recognized using eight morphological features based on the differences between cashmere and wool fiber scales. In ref. [6], a Bayesian model for recognizing cashmere and wool was established using three morphological features of the fiber. In identification systems based on morphological features, however, the complex background of the collected images sometimes makes it difficult to extract the morphological features of the fiber. Texture-based recognition, in contrast, achieves high accuracy through an objective description of fiber features: the scale texture of the fiber image is classified by a feature extraction model built with image processing methods.

In ref. [7], cashmere and wool were identified by extracting Tamura texture features. In ref. [8], morphological and textural features of the fiber were extracted with the gray-level co-occurrence matrix and an interactive measurement algorithm, after which cashmere and wool fibers were identified quickly and accurately using K-means clustering. In ref. [9], cashmere and wool fibers were classified by textural features obtained from the spatial and frequency domains of the fiber images. In ref. [10], the four sub-images of a wavelet decomposition were analyzed with a Gaussian Markov random field model, and classification results for cashmere and wool images were reported using eight-dimensional texture features generated from the sub-images. In ref. [11], speeded-up robust features (SURF) were extracted to transform the original image into a high-dimensional feature vector, which was then classified.

Deep learning technology is favored for its good classification performance, and several deep learning architectures for cashmere and wool classification have been developed in recent years. In ref. [12], a vision-based fiber recognition framework combining image segmentation and a deep convolutional neural network was proposed; it can segment overlapping and adhering translucent fibers. Luo et al. [13] proposed a residual network method to identify cashmere and wool fibers; their results show that a residual network with 18 weight layers achieves the highest accuracy. Ref. [14] applied four pre-trained convolutional neural networks (AlexNet, VGG-16, VGG-19, and GoogLeNet) to transfer learning and evaluated the performance differences between the four architectures in identifying similar fibers. Convolutional neural network (CNN) systems show good accuracy in classifying the similar fibers of cashmere and wool. However, because such systems require a large amount of sample data to train the network model, they cannot achieve good recognition accuracy with limited data.

In this article, we propose a feature fusion method for wavelet multi-scale images based on the maximum inter-class variance, which yields features that distinguish cashmere and wool fibers well while reducing both the required data volume and the computational complexity of the algorithm. The method fuses Tamura texture features extracted via the Discrete Wavelet Transform. Wavelet transform is currently widely used in image classification and image fusion [15,16]. Compared with traditional texture feature extraction methods, multi-scale wavelet decomposition has proved successful at expressing image details and classifying image textures. Section 2 gives the details of the proposed classification method for cashmere and wool, Section 3 analyzes the experimental results, and Section 4 concludes.

Proposed method

Due to differences in growth environment, breeding methods, and crossbreeding, cashmere coarsening has become very common, making the distinction between cashmere and wool more complex. In this work, an automatic recognition method for cashmere and wool is proposed. It extracts texture features from fiber images through image processing and fuses them, following the idea of maximum inter-class variance, into feature vectors that separate the two classes significantly. The system block diagram is shown in Figure 1. First, the collected source image of the fiber is enhanced and cropped to reduce the influence of the background and to obtain the target area for feature extraction. Then, the Tamura texture features of each sub-image produced by the Discrete Wavelet Transform of the target image are extracted. Afterward, optimal weights are introduced for feature fusion based on the maximum inter-class variance, and the fused features are normalized. Finally, a support vector machine (SVM) is used for classification to obtain the final recognition accuracy.

Figure 1

The flow chart of cashmere and wool classification method.

Details of the dataset

In this study, cashmere and wool fibers from northern China are used as raw materials, and images of the fibers obtained by scanning electron microscope (SEM) form the dataset of this article. The dataset contains 400 images of each fiber type; the SEM magnification is set to 1,000×, and the horizontal and vertical resolutions are 96 dpi. Figure 2(a) and (b) show microscopic SEM images of cashmere and wool fibers, respectively. Image processing and feature extraction are implemented in Python. The dataset is divided into two groups: one is used as the training set for the system's cross-validation, and the other is used as the test set to evaluate the final recognition performance.

Figure 2

Fiber image provided by scanning electron microscope: (a) cashmere and (b) wool.

Preprocessing

To reduce interference and obtain detailed information about the target fiber, the texture of the fiber image is first enhanced by an adaptive texture enhancement algorithm, and the images are then binarized with the Otsu threshold method [17]. This step separates the image background from the fiber area, after which the region of interest is filled and noise is removed. The central axis of the fiber is then obtained and the required rotation angle is calculated to facilitate subsequent cropping. Finally, the filled fiber image is used as a template and the original image is cropped to obtain the target area for feature extraction. This process is shown in Figure 3.
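The Otsu step above selects the threshold that maximizes the between-class variance of the grayscale histogram. A minimal NumPy sketch of that idea (not the authors' code; the synthetic image and all names are illustrative):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance
    over a 256-bin histogram of an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic bimodal image: dark background, bright fiber region
img = np.zeros((64, 64), dtype=np.uint8)
img[20:44, 20:44] = 200
t = otsu_threshold(img)
binary = img >= t  # foreground mask used for filling and cropping
```

In practice a library implementation (e.g. OpenCV or scikit-image) would be used; the loop above only shows the criterion being optimized.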

Figure 3

Image preprocessing process: (a) original image, (b) enhancement, (c) binarization, (d) fill the image, (e) remove outliers, (f) central axis, (g) rotation angle, (h) rotate, and (i) crop.

Multi-scale decomposition of wavelet analysis

Wavelet transform is often called the "image microscope" of image processing: its multi-scale decomposition separates image information layer by layer through low-pass and high-pass filters [18]. The Discrete Wavelet Transform decomposes the original image signal, using a series of wavelets at different scales, into multiple detail images and an approximation image. In this article, the Haar wavelet is used to decompose the cashmere and wool fiber images. The Haar scale function is expressed by the following equation [19]:

$$\phi(x)=\begin{cases}1, & 0\le x<1,\\ 0, & \text{otherwise},\end{cases}\qquad \phi_i^j(x)=\phi(2^jx-i),\quad i=0,\dots,2^j-1.$$

The Haar wavelet function is expressed by the following equation:

$$\psi(x)=\begin{cases}1, & 0\le x<\tfrac{1}{2},\\ -1, & \tfrac{1}{2}\le x<1,\\ 0, & \text{otherwise},\end{cases}\qquad \psi_i^j(x)=\psi(2^jx-i),\quad i=0,\dots,2^j-1.$$

The approximation and detail sub-images obtained by Haar wavelet decomposition of the original cashmere and wool fiber images are shown in Figure 4.
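A single level of the 2-D Haar decomposition described above can be sketched in NumPy by averaging and differencing pairs of rows, then pairs of columns. This is an illustrative implementation, not the authors' code (in practice a library such as PyWavelets would be used, and normalization conventions vary):

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar transform: returns the approximation
    (LL) and the horizontal (LH), vertical (HL), and diagonal (HH)
    detail sub-images, each half the size of the input."""
    img = img.astype(float)
    a, b = img[0::2, :], img[1::2, :]
    lo_r, hi_r = (a + b) / 2.0, (a - b) / 2.0      # row pass
    def col_pass(x):
        c, d = x[:, 0::2], x[:, 1::2]
        return (c + d) / 2.0, (c - d) / 2.0        # column pass
    LL, LH = col_pass(lo_r)
    HL, HH = col_pass(hi_r)
    return LL, LH, HL, HH

img = np.arange(16.0).reshape(4, 4)   # toy 4x4 "image"
LL, LH, HL, HH = haar_dwt2(img)
```

Each sub-image is half the input size, so a 256 × 256 fiber image yields four 128 × 128 sub-images for feature extraction.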

Figure 4

Wavelet multi-scale decomposition of cashmere image.

Feature fusion based on the maximum inter-class variance

In order to make full use of the image information in the low-frequency and high-frequency domains, the Tamura texture features are extracted from the approximation and detail sub-images obtained by the wavelet decomposition [20], which better characterizes the differences between fiber scales. The texture feature vectors with the largest separability between cashmere and wool are then constructed by feature fusion based on the maximum inter-class variance. Let the feature vector extracted from the approximation sub-image be $F_{\text{LL}}=[f_{\text{LL}1}, f_{\text{LL}2}, f_{\text{LL}3}]$, from the horizontal-detail sub-image $F_{\text{LH}}=[f_{\text{LH}1}, f_{\text{LH}2}, f_{\text{LH}3}]$, from the vertical-detail sub-image $F_{\text{HL}}=[f_{\text{HL}1}, f_{\text{HL}2}, f_{\text{HL}3}]$, and from the diagonal-detail sub-image $F_{\text{HH}}=[f_{\text{HH}1}, f_{\text{HH}2}, f_{\text{HH}3}]$. The three features extracted from each sub-image are the roughness, contrast, and linearity of the fiber scales, which serve as descriptors of the original fiber. The feature vector with the maximum inter-class variance is then obtained by linear fusion.
By linearly combining the features extracted from the four sub-images with appropriate weights, we obtain the three-dimensional fused feature vector $F$:

$$F = a\cdot F_{\text{LL}} + b\cdot F_{\text{LH}} + c\cdot F_{\text{HL}} + d\cdot F_{\text{HH}},$$

where $a+b+c+d=1$; in each feature vector the first component represents roughness, the second contrast, and the third linearity.
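As a small worked example of this fusion step, the sketch below combines four hypothetical 3-D Tamura feature vectors with the weights reported in the article; all feature values are made up purely for illustration:

```python
import numpy as np

# Hypothetical 3-D Tamura feature vectors (roughness, contrast,
# linearity) from the four wavelet sub-images -- illustrative values.
F_LL = np.array([0.8, 0.5, 0.3])
F_LH = np.array([0.2, 0.4, 0.1])
F_HL = np.array([0.3, 0.6, 0.2])
F_HH = np.array([0.5, 0.7, 0.4])

# Fusion weights from the article; they must sum to 1
a, b, c, d = 0.3, 0.1, 0.2, 0.4
F = a * F_LL + b * F_LH + c * F_HL + d * F_HH  # fused feature vector
```

The fused vector stays three-dimensional, so the fusion adds discriminative information without increasing the classifier's input dimension.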

To obtain the optimal feature fusion coefficients, a metric $\lambda$ is introduced to measure the ratio of inter-class to intra-class variance:

$$\lambda = \frac{\delta_{\text{ex}}^{2}}{\delta_{\text{in}}^{2}},$$

where $\delta_{\text{in}}^{2}$ represents the intra-class variance of the cashmere and wool fiber images computed on the training set, and $\delta_{\text{ex}}^{2}$ represents the inter-class variance. Their expressions are as follows:

$$\delta_{\text{in}}^{2} = \sqrt{\frac{1}{N_1}\sum_{i=1}^{N_1}(x_i-\mu_1)^2 + \frac{1}{N_2}\sum_{j=1}^{N_2}(x_j-\mu_2)^2},$$

$$\delta_{\text{ex}}^{2} = \frac{1}{N_1+N_2}\sum_{l=1}^{N_1+N_2}(x_l-\mu)^2,$$

where $\mu_1$ and $\mu_2$ are the mean values of the cashmere and wool features in the training set, respectively, and $\mu$ is the mean of the whole sample. $N_1$ and $N_2$ are the numbers of cashmere and wool samples in the training set, and $x_i$ and $x_j$ are the feature values of cashmere and wool, respectively. When $\lambda$ reaches its maximum, the feature vector with the maximum inter-class variance for distinguishing cashmere and wool is obtained. The experimental results give $a=0.3$, $b=0.1$, $c=0.2$, $d=0.4$ at this maximum.
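The $\lambda$ criterion can be sketched as follows, mirroring the formulas above (including the square root in the intra-class term as written in the article). The two sample populations are synthetic and purely illustrative; well-separated classes should yield a larger $\lambda$ than overlapping ones:

```python
import numpy as np

def separability(x_cls1, x_cls2):
    """Ratio of inter-class to intra-class variance (lambda) for one
    fused feature, following the criterion in the text."""
    mu1, mu2 = x_cls1.mean(), x_cls2.mean()
    x_all = np.concatenate([x_cls1, x_cls2])
    intra = np.sqrt(((x_cls1 - mu1) ** 2).mean()
                    + ((x_cls2 - mu2) ** 2).mean())
    inter = ((x_all - x_all.mean()) ** 2).mean()
    return inter / intra

# Hypothetical 1-D feature samples for two classes
rng = np.random.default_rng(0)
cash = rng.normal(0.0, 1.0, 200)
lam_far = separability(cash, rng.normal(3.0, 1.0, 200))   # separated
lam_near = separability(cash, rng.normal(0.2, 1.0, 200))  # overlapping
```

In the article, the optimal weights $(a, b, c, d)$ are the ones that maximize this ratio on the training set; a simple grid search over weight combinations summing to 1 would realize that selection.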

Classifier

In the last step of the recognition system, different classifiers are used to verify the performance of the fused features via ten-fold cross-validation: K-nearest neighbors (KNN), support vector machine (SVM), and linear discriminant analysis (LDA). KNN classifies a query point according to its similarity to its K nearest training points [21], SVM classifies features by searching for the optimal separating hyperplane in feature space [22], and LDA classifies features by finding the best projection line [23].
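As an illustration of the first of these classifiers, the sketch below implements a minimal KNN majority vote in NumPy on toy 3-D feature vectors (the article uses library classifiers; this is not the authors' code, and the cluster data are invented):

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Minimal K-nearest-neighbors classifier: label each query point
    by majority vote among its k closest training points (Euclidean)."""
    preds = []
    for q in X_query:
        dist = np.linalg.norm(X_train - q, axis=1)
        nearest = y_train[np.argsort(dist)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Toy 3-D feature vectors: class 0 around the origin, class 1 offset
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 3)),
               rng.normal(2.0, 0.3, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
pred = knn_predict(X, y, np.array([[0.1, 0.0, 0.1],
                                   [2.1, 1.9, 2.0]]))
```

SVM and LDA follow the same fit-then-predict pattern but learn a hyperplane and a projection direction, respectively, instead of voting over neighbors.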

Experimental results and analysis

In this work, the size of the collected fiber images does not affect the final fiber characteristics, so each image is cropped to 256 × 256 pixels and converted to grayscale for subsequent processing. To obtain feature vectors with the maximum inter-class variance, the features extracted from the four wavelet sub-images are fused linearly with introduced weights, and the metric factor λ is used to measure the separability of the fused features. The results show that the maximum inter-class variance is obtained when the fusion coefficients of the low-frequency approximation sub-image and the high-frequency horizontal, vertical, and diagonal detail sub-images are 0.3, 0.1, 0.2, and 0.4, respectively, as shown in Table 1.

Difference between intra-class and inter-class variance

Class      Intra-class   Inter-class   λ
Cashmere   0.227         0.375         0.563
Wool       0.217

The recognition accuracy of the features extracted from each sub-image is evaluated using SVM, as shown in Table 2. The features extracted from the diagonal high-frequency detail sub-image achieve the highest recognition accuracy among the sub-images, indicating that the difference between cashmere and wool fiber scales is most pronounced in the diagonal direction.

Performance measure values of original and sub-image features

Features Accuracy (%) Precision (%) Recall (%)
Original 88.47 88.16 88.71
Approximate 82.86 82.04 83.40
Horizontal 78.06 75.92 79.32
Vertical 79.90 77.76 81.24
Diagonal 84.29 83.88 84.57

To visualize the extracted high-dimensional feature data, the multi-dimensional features are mapped onto a two-dimensional plane. Figure 5 shows the features of the target area and the different sub-images, and reflects the separability of the features after linear fusion based on the maximum inter-class variance. The performance measures used in this article are accuracy, precision, recall, F1-score, and misclassification error (MCE).
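One common way to perform such a 2-D mapping is principal component analysis. The article does not state which projection method was used, so the following is a hedged NumPy sketch on random stand-in features:

```python
import numpy as np

def pca_2d(X):
    """Project feature vectors onto their first two principal
    components via SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T   # 2-D coordinates for plotting

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))   # hypothetical 3-D fused features
Y = pca_2d(X)
```

The resulting `Y` can be scatter-plotted with one color per class to inspect separability, as in Figure 5.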

Figure 5

Comparison of the distribution of different feature vectors.

The system evaluates the extracted features by ten-fold cross-validation with several typical classifiers using the weight-fused features, as shown in Table 3. The results show that the SVM with a Gaussian kernel has the highest recognition rate: 95.20% accuracy, 96.46% precision, 95.86% recall, and 96.16% F1-score.

Performance measure values of different classifiers

Classifier Accuracy (%) Precision (%) Recall (%) F1-score (%) MCE (%)
KNN 92.35 91.84 92.78 92.31 7.65
LDA 91.12 90.20 91.89 91.04 8.88
SVM 95.20 96.46 95.86 96.16 4.80

Using the same training/test split ratio, several existing feature extraction methods and the method of this article are evaluated. As shown in Figure 6, the proposed method achieves a high recognition rate in classifying cashmere and wool fibers.

Figure 6

Comparison with the existing method.

Conclusion

In this article, an automatic classification method for cashmere and wool fibers is proposed. The method obtains the most discriminative feature vectors through feature fusion based on the maximum inter-class variance. First, the original image is preprocessed to obtain the target fiber region, reducing the impact of interference such as background and noise. Then, the image is decomposed by the Discrete Wavelet Transform into sub-images containing its approximation and detail information, and the Tamura texture features are extracted from each of them. Next, the fusion weights are determined, and the feature vectors with the maximum inter-class variance are fed into the classifier. Evaluating the extracted features with different classifiers shows that the SVM classifier performs best, with an accuracy of 95.20%. The results indicate that linearly fusing these features yields the optimal feature vector for cashmere and wool and improves fiber recognition accuracy. However, this method still has shortcomings, such as the small amount of sample data and experimental errors. Therefore, more diverse fiber samples will be collected for animal fiber recognition in future work.


References

[1] Luo, J., Lu, K., Zhang, B., Zhang, Y., Chen, Y., Tian, J. (2021). Current status and progress of identification methods for cashmere and wool fibers. Wool Textile Journal, 48(10), 112–117.

[2] Xing, W., Xin, B., Deng, N., Chen, Y., Zhang, Z. (2019). A novel digital analysis method for measuring and identifying of wool and cashmere fibers. Measurement, 132, 11–21.

[3] Zhong, Y., Lu, K., Tian, J., Zhu, H. (2017). Wool/cashmere identification based on projection curves. Textile Research Journal, 87(14), 1730–1741.

[4] Lin, P. (2019). Detection method for distinguishing wool from cashmere. Western Leather, 41(18), 160.

[5] Ma, C., Liu, X., Liu, F. (2014). A research on cashmere automatic identification method based on statistical analysis. Wool Textile Journal, 42(10), 62–64.

[6] Xing, W., Liu, Y., Deng, N., Xin, B., Wang, W., Chen, Y. (2020). Automatic identification of cashmere and wool fibers based on the morphological features analysis. Micron, 128, 102768.

[7] Yuan, S., Lu, K., Zhong, Y. (2016). Identification of wool and cashmere based on texture analysis. Key Engineering Materials, 671, 385–390.

[8] Xing, W., Deng, N., Xin, B., Wang, Y., Chen, Y., Zhang, Z. (2019). An image-based method for the automatic recognition of cashmere and wool fibers. Micron, 141, 102–112.

[9] Zhu, Y., Huang, J., Wu, T., Ren, X. (2020). Identification method of cashmere and wool based on texture features of GLCM and Gabor. Journal of Engineered Fibers and Fabrics, 16, 1–7.

[10] Xing, W., Deng, N., Xin, B., Chen, Y., Zhang, Z. (2019). Investigation of a novel automatic micro image-based method for the recognition of animal fibers based on wavelet and Markov random field. Micron, 119, 88–97.

[11] Lu, K., Luo, J., Zhong, Y., Chai, X. (2019). Identification of wool and cashmere SEM images based on SURF features. Journal of Engineered Fibers and Fabrics, 14, 1–9.

[12] Gao, F., Lin, J., Liu, H., Lu, S. (2019). A novel VBM framework of fiber recognition based on image segmentation and DCNN. IEEE Transactions on Instrumentation and Measurement, 69(4), 963–973.

[13] Luo, J., Lu, K., Chen, Y., Zhang, B. (2021). Automatic identification of cashmere and wool fibers based on microscopic visual features and residual network model. Micron, 143, 103023.

[14] Xing, W., Liu, Y., Xin, B., Zang, L., Deng, N. (2022). The application of deep and transfer learning for identifying cashmere and wool fibers. Journal of Natural Fibers, 19(1), 88–104.

[15] Mishra, S., Deepthi, V. (2021). Brain image classification by the combination of different wavelet transforms and support vector machine classification. Journal of Ambient Intelligence and Humanized Computing, 12(6), 6741–6749.

[16] Dou, J., Qin, Q., Tu, Z. (2019). Image fusion based on wavelet transform with genetic algorithms and human visual system. Multimedia Tools and Applications, 78(9), 12491–12517.

[17] Otsu, N. (1979). A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9(1), 62–66.

[18] Deighan, A., Watts, D. (1997). Ground-roll suppression using the wavelet transform. Geophysics, 62(6), 1896–1903.

[19] Stankovic, R., Falkowski, B. (2003). The Haar wavelet transform: its status and achievements. Computers & Electrical Engineering, 29(1), 25–44.

[20] Tamura, H., Mori, S., Yamawaki, T. (1978). Textural features corresponding to visual perception. IEEE Transactions on Systems, Man, and Cybernetics, 8(6), 460–473.

[21] Liao, Y., Vemuri, V. (2002). Use of k-nearest neighbor classifier for intrusion detection. Computers & Security, 21(5), 439–448.

[22] Cherkassky, V., Ma, Y. (2004). Practical selection of SVM parameters and noise estimation for SVM regression. Neural Networks, 17(1), 113–126.

[23] Liu, X., Zhang, L., Li, M., Zhang, H., Wang, D. (2005). Boosting image classification with LDA-based feature combination for digital photograph management. Pattern Recognition, 38(6), 887–901.
