Multi-algorithmic Palmprint Authentication System Based on Score Level Fusion

Biometrics identifies individuals from their behavioural and physiological characteristics to provide highly secure identification and personal authentication. Xiaoxing et al. (2015) and Wang and Liu (2014) implemented biometric algorithms for iris and face recognition that exploit traits unique to each individual. The palmprint is a powerful biometric that contains rich texture information and stable line features. It is one of the most promising modalities and has been used in many applications such as industry, automatic teller machines, banking, marketing, finance, and airports. Algorithms used to extract palmprint features fall into line-based, subspace-based, and texture-based approaches. Palmprint images are captured using various devices such as charge-coupled device (CCD) scanners, digital scanners, and digital/video cameras. Zhang et al. (2003) used a CCD camera to capture low-resolution palmprint images for an online palmprint identification system. A two-dimensional (2D) Gabor phase coding scheme was employed to extract texture features from the low-resolution palmprint images with few coefficients, and a normalized Hamming distance was used for matching. Results on a palmprint database of 7,752 images from 192 individuals achieved a genuine acceptance rate (GAR) of 98% at a false acceptance rate (FAR) of 0.04%.

Wu et al. (2004) used directional line detectors to extract the principal lines for automatic classification of low-resolution palmprint images; the extracted principal lines were used to classify the palmprint images into six categories with an accuracy of 96.03%. Connie et al. (2005) proposed a palmprint recognition system using three subspace projection techniques, namely principal component analysis (PCA), Fisher discriminant analysis (FDA), and independent component analysis (ICA). Their results showed that applying FDA to wavelet subbands yields a false rejection rate (FRR) and FAR as low as 1.492% and 1.356%, respectively. Gan et al. (2006) used an effective feature extraction algorithm based on multiscale wavelet subimage analysis to extract edge features of the palmprint image; experiments on the PolyU palmprint database at wavelet decomposition level six achieved a recognition rate of 94.96%.

Chen et al. (2006) applied the Dual-Tree Complex Wavelet Transform (DT-CWT) to extract palmprint features at decomposition level four. The 2D Fourier transform was performed on the wavelet subbands to obtain the spectrum magnitude, and the extracted features, classified using an SVM, achieved a recognition rate of 97%. Arivazhagan et al. (2006) presented a rotation-invariant palmprint texture classification approach using Gabor wavelets, which decompose the palmprint image into multiple scales and orientations from which texture features are extracted. The highest mean success rate of 93.75% was obtained for feature vector F7 at the 5th scale and 10th orientation of the wavelet decomposition with 100 features.

Ardabili et al. (2011) presented a method for palmprint verification using the contourlet transform and an AdaBoost classifier; contourlet features extracted from palmprint images were matched using AdaBoost, and results on the PolyU palmprint database gave an FAR of 0.21347% and an FRR of 5.833% after ten iterations. Gawande et al. (2010) combined multiple algorithms for iris recognition using score-level fusion: the matching scores obtained from three algorithms were normalized and combined with rule-based fusion methods (average, minimum, and maximum) to decide whether a user is genuine or an impostor. Vaidehi et al. (2012) used multiple algorithms with score-level fusion to enhance the performance of a face authentication system, where a dynamically updated weighting strategy makes the system adaptive to variations in image capture. A multi-algorithmic palmprint identification system fused at the match-score level using a dynamic weighted sum-rule (Badrinath and Gupta, 2007) obtained a GAR of 94.80%.

The rest of the paper is organized as follows. Section “Multi-Algorithmic Palmprint Verification System” describes palmprint image acquisition and image pre-processing. Feature extraction using the DT-CWT and the contourlet transform with PCA is discussed in section “Feature Extraction”. Pattern matching using sum-rule, weighted sum-rule, and SVM score-level fusion schemes is illustrated in section “Pattern Matching - Score Level Fusion”. Finally, the experimental results and the conclusion with future enhancements are presented.

Multi-algorithmic palmprint verification system

The palmprint is one of the most effective physiological biometrics, containing rich texture information and unique, stable line features. It offers many distinctive features such as principal lines, wrinkles, ridges, minutiae points, singular points, and texture for reliable personal recognition. The multi-algorithmic approach for palmprint recognition shown in Figure 1 uses two different feature extraction algorithms, namely the 2D DT-CWT and the contourlet transform with PCA, to address the problem of illumination and pose variations. The matching scores obtained from the Euclidean distance classifiers are fused using an SVM score-level fusion scheme to decide whether the user is genuine or an impostor. The main stages of the palmprint recognition system are image acquisition, pre-processing, feature extraction, and classification.

Image acquisition

Automatic palmprint recognition systems fall into two categories: online and offline. An online system acquires palmprint images using a digital camera or a special capture device directly connected to a computer for real-time processing. In an offline system, palmprint images are obtained using a digital scanner, from which ridge, singular-point, and minutiae features can be extracted (Wong et al., 2005). Palmprint images can be captured with various devices such as web cameras, digital cameras, mobile phones, and digital scanners.

The Biometric Research Centre at The Hong Kong Polytechnic University has developed a multispectral palmprint image acquisition device that acquires high-quality palmprint images of 384 × 284 pixels at reasonable cost for the recognition system. Figure 2 shows sample low-resolution and high-resolution palmprint images.

Fig. 1

Block diagram of multi-algorithmic palmprint authentication system.

Fig. 2

Sample palmprint images.

Image pre-processing

Pre-processing segments the region of interest (ROI) so that only the required features are extracted, which makes it necessary to obtain a subimage from the captured palmprint image. The palmprint images are oriented and normalized before feature extraction to eliminate variations caused by rotation and translation. Segmentation is a major task in palmprint recognition and helps enhance the quality of the palmprint images. The ROI provides the information most suitable for efficient recognition. Two techniques used in this process are square-based segmentation and circle-based segmentation (Zhang, 2004).

Square-based segmentation

The palmprint images are oriented and translated before the segmentation process. The steps followed for square-based segmentation (Wang et al., 2007) are given below; a short code sketch of the first steps follows the list:

Step 1 A median filter is used to remove noise from the palmprint image.

Step 2 Thresholding converts the gray-scale image I(i, j) into a binary image, where ‘α’ is the threshold value, set to 100:

$$I(i,j)=\begin{cases}0, & I(i,j)<\alpha,\\ 1, & I(i,j)\geq\alpha.\end{cases}\qquad(1)$$

Step 3 The contour of the palmprint image is obtained using a contour-tracing algorithm. The two valley points C1 and C2 are located as shown in Figure 3(a).

Step 4 The two valley points are joined by the straight line of Eq. (2):

$$y-y_1=\left(\frac{y_2-y_1}{x_2-x_1}\right)(x-x_1).\qquad(2)$$

Step 5 Points on the palmprint contour at angles of 45° and 55° to the reference straight line are located, giving lines L1 and L2. Figures 3(b) and (c) illustrate the reference points and the angles made with the reference line. The ROI, the centre part of the palmprint image, is then segmented as shown in Figure 3(d).

The angle measured from the reference line can be varied by up to ±40°. As the valley points are different for every person, this method is highly stable.
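As a concrete illustration of Steps 1–3, the minimal Python sketch below applies the median filter, the threshold of Eq. (1) with α = 100, and contour tracing. Locating the valley points C1 and C2 and the 45°/55° reference lines is omitted, and the library choices (SciPy, scikit-image) are illustrative rather than the authors' implementation, which was carried out in MATLAB.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage import measure

def preprocess_palm(gray, alpha=100):
    """Steps 1-3 of square-based segmentation: median filtering,
    thresholding at alpha (Eq. 1) and contour tracing. Locating the
    valley points C1, C2 and the 45/55 degree reference lines is
    not shown here."""
    denoised = median_filter(gray, size=3)           # Step 1: remove noise
    binary = (denoised >= alpha).astype(np.uint8)    # Step 2: I = 1 where I >= alpha
    contours = measure.find_contours(binary, 0.5)    # Step 3: trace the palm contour
    return binary, max(contours, key=len)            # keep the longest contour
```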

Fig. 3

Square-based segmented palmprint image.

Circle-based segmentation

Circle-based segmentation is an efficient technique for extracting the central part of the palmprint image, and it can also be used on its own without subsequent feature extraction and classification. The gray image is converted to a binary image using Eq. (1). The two valley points C1 and C2 are obtained using the contour-tracing algorithm, as shown in Figure 4(a), and are joined by the straight line of Eq. (2), as shown in Figure 4(b).

The steps for the circle-based segmentation are given below:

Step 1 The radius of the circle ‘r’ is calculated from the midpoint of the straight line using Eq. (3), where x1, x2, y1, and y2 are the coordinate points along the X and Y axes, respectively:

$$r=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}.\qquad(3)$$

Step 2 Using the ‘strel’ command, the neighborhood region of the centre point is obtained and the image is cropped as shown in Figure 4(c).

This is a very stable feature, as the valley points of the palmprint differ from person to person; consequently, the radius of the circle also varies for each person. A test database is created using the radius of the circle, and the test palmprint image is checked against this database. If the radius of the test palmprint image matches an image in the database, the person to whom the palmprint belongs can be identified easily. FRR and FAR are reduced to below 0.001%.
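The cropping step can be sketched as follows, assuming the circle is centred on the midpoint of the line joining C1 and C2 and taking r directly from Eq. (3); whether the full distance or half of it is used as the radius is not stated in the text, so this is only an illustrative reading.

```python
import numpy as np

def circle_roi(gray, c1, c2):
    """Circle-based ROI: the radius comes from Eq. (3) applied to the two
    valley points, and the circle is centred on the midpoint of the line
    joining them (an illustrative reading of the text)."""
    (x1, y1), (x2, y2) = c1, c2
    r = np.hypot(x2 - x1, y2 - y1)                 # Eq. (3)
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0      # midpoint of the C1-C2 line
    yy, xx = np.ogrid[:gray.shape[0], :gray.shape[1]]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
    return gray * mask                             # keep only the circular neighbourhood
```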

Fig. 4

Circle-based segmented palmprint image.

Feature extraction

Feature extraction is the process of obtaining feature sets from the segmented palmprint image. The 2D DT-CWT is introduced into the palmprint recognition system to extract the most discriminative features and to reduce the feature-vector size. A new method combining the contourlet transform with PCA is used to produce fast and accurate results in the palmprint recognition system.

Contourlet transform has been suggested for the palmprint recognition for the following reasons:

To provide the biometric system with a transform that can deal with low-resolution images when identifying individuals.

To obtain a multiscale and multidirectional decomposition of the image.

To extract low-frequency and high-frequency contourlet coefficients at different scales and angles.

The DT-CWT provides both approximate shift invariance and good directional selectivity. It is employed in the multi-algorithmic palmprint recognition system to represent the palmprint image while better preserving its discriminative features, with less redundancy and lower computational cost.

Feature extraction using DT-CWT

The DT-CWT is a form of the Discrete Wavelet Transform (DWT) that generates complex coefficients by using a dual tree of wavelet filters to obtain their real and imaginary parts. It has a pyramid-like structure with a Gabor-like frequency response and six multiscale directional subbands (±15°, ±45°, ±75°). It provides two main advantages over the real DWT image representation, namely approximate shift invariance and improved directional selectivity, which render the DT-CWT nearly rotation invariant.

In the dual tree, two real wavelet trees are used, each capable of perfect reconstruction. One tree generates the real part of the transform and the other generates the imaginary part. {H0(z), H1(z)} is a quadrature mirror filter (QMF) pair in the real-coefficient analysis branch; for the imaginary part, {G0(z), G1(z)} is another QMF pair in the analysis branch (Selesnick et al., 2005). All filter pairs are orthogonal and real valued. This realization laid the foundation for further improving the recognition rates already achieved by other methods. The extracted feature vectors are given as input to a classifier. The DT-CWT can be used for a variety of applications such as edge detection, image restoration, enhancement, and image compression.

The 1D dual-tree wavelet transform is implemented using a pair of filter banks operating on the same data simultaneously. The upper iterated filter bank represents the real part of the complex wavelet transform and the lower one represents the imaginary part. The orthogonal two-channel filter bank with analysis low-pass filter H0(z), analysis high-pass filter H1(z), and synthesis filters G0(z) = H0(z⁻¹) and G1(z) = H1(z⁻¹) is shown in Figure 5.

For an input signal X(z), the analysis part of the filter bank, together with the subsequent downsampling, produces the low-pass coefficients C1 and the high-pass coefficients D1 (Zhang et al., 2007):

$$C_1(z^2)=\tfrac{1}{2}\left\{X(z)H_0(z)+X(-z)H_0(-z)\right\},$$

$$D_1(z^2)=\tfrac{1}{2}\left\{X(z)H_1(z)+X(-z)H_1(-z)\right\}.$$

The decomposed low-frequency part X_{l1}(z) and high-frequency part X_{h1}(z) are described in Eqs. (7) and (8):

$$X(z)=X_{l1}(z)+X_{h1}(z),$$

$$X_{l1}(z)=C_1(z^2)G_0(z)=\tfrac{1}{2}\left\{X(z)H_0(z)G_0(z)+X(-z)H_0(-z)G_0(z)\right\},\qquad(7)$$

$$X_{h1}(z)=D_1(z^2)G_1(z)=\tfrac{1}{2}\left\{X(z)H_1(z)G_1(z)+X(-z)H_1(-z)G_1(z)\right\}.\qquad(8)$$

However, this decomposition is not shift invariant because of the X(−z) aliasing terms introduced by the downsampling and upsampling operations. The 2D DT-CWT of an image is computed by applying the pair of trees to the rows and columns of the image, as in the basic DWT. The real 2D DT-CWT of an image x(n) is implemented using two critically sampled separable 2D DWTs in parallel; for each pair of subbands, the sums and differences are taken, as shown in Figure 6. Each level of the tree produces six band-pass subimages as well as two low-pass subimages, on which subsequent stages iterate. A real 2D filter bank yields three high-pass subbands with orientations of 0°, 45°, and 90°, whereas the complex filters yield six high-pass subbands at each level, oriented at ±15°, ±45°, ±75°. The shift invariance and directionality of the DT-CWT are exploited to extract the optimal features from the palmprint image, as shown in Figure 7.
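A hedged sketch of DT-CWT feature extraction is given below. It relies on the open-source Python dtcwt package (an assumption for illustration; the paper's experiments were run in MATLAB) and forms a feature vector from the energies of the six oriented complex subbands at each of four decomposition levels.

```python
import numpy as np
import dtcwt  # open-source DT-CWT package (assumed here; the paper used MATLAB)

def dtcwt_features(roi, levels=4):
    """2D DT-CWT over `levels` scales; each level yields six complex
    oriented subbands (+-15, +-45, +-75 degrees). The feature vector is
    the energy of each subband magnitude."""
    transform = dtcwt.Transform2d()
    pyramid = transform.forward(roi.astype(float), nlevels=levels)
    feats = []
    for highpass in pyramid.highpasses:        # one (H, W, 6) complex array per level
        for d in range(highpass.shape[2]):
            mag = np.abs(highpass[:, :, d])
            feats.append(np.sum(mag ** 2))     # subband energy
    return np.asarray(feats)
```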

Fig. 5

One level signal wavelet decomposition and reconstruction.

Fig. 6

2D DT-CWT decomposition.

Fig. 7

Palmprint features extracted using DT-CWT.

Feature extraction using contourlet transform and PCA

The 2D DWT is good at detecting point discontinuities at edges but is not effective at representing the geometrical smoothness of contours. The contourlet transform used for feature extraction provides flexible multiresolution and directional decomposition to capture smooth contours in the palmprint images. The Laplacian pyramid (LP) combined with a Directional Filter Bank (DFB), decomposing the palmprint image into directional subbands at multiple scales, is shown in Figure 8. The LP decomposition generates a downsampled low-pass version of the original image and obtains the band-pass image as the difference between the original and the predicted values (Burt and Adelson, 1983). The double filter bank is constructed from two blocks: quincunx filter banks with fan filters, and a shearing operator. The first block divides the spectrum into vertical and horizontal directions, and the second block reorders the image samples in a binary-tree decomposition structure to obtain the desired 2D spectrum division (Do and Vetterli, 2005).

The band-pass image from the LP is fed into the DFB to extract the 2D directional information of the image (Bamberger and Smith, 1992). The main reason for combining the DFB with the LP is to remove the low-frequency content before directional filtering, since the DFB alone handles low frequencies poorly in several directional subbands. Figure 9 depicts the contourlet features of a palmprint image without preprocessing at pyramid level-four decomposition.
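The LP stage can be illustrated with the short sketch below, which computes one pyramid level as a downsampled low-pass image plus a band-pass difference image; the directional filter bank itself is omitted, and the Gaussian low-pass filter and bilinear upsampling are illustrative choices rather than the paper's exact filters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_level(img):
    """One Laplacian-pyramid level: a downsampled low-pass version of the
    image, plus a band-pass detail image formed as the difference between
    the original and the upsampled prediction (Burt and Adelson, 1983).
    The band-pass output is what the DFB would then split directionally."""
    imgf = img.astype(float)
    low = gaussian_filter(imgf, sigma=1.0)[::2, ::2]              # low-pass + downsample
    pred = zoom(low, 2, order=1)[:imgf.shape[0], :imgf.shape[1]]  # upsampled prediction
    bandpass = imgf - pred                                        # detail fed to the DFB
    return low, bandpass
```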

The contourlet transform provides 2^n directional subbands at each scale. The pyramidal level-three and level-four decompositions of the palmprint image therefore produce eight and sixteen subbands, respectively. Figure 10 illustrates the contourlet features of a palmprint image with preprocessing at pyramid level-three and level-four decompositions.

The contourlet features for classification are obtained by calculating the energy of each subband and selecting a subset of features from the high-dimensional feature set. Statistical features are extracted from each subband to create a feature vector. Energy, contrast, correlation, and homogeneity are the discriminating features calculated using properties of the Gray Level Co-occurrence Matrix (GLCM) (Ardabili et al., 2011). The properties are as follows:

$$\mathrm{Energy}=\sum_{i,j}p(i,j)^2,$$

$$\mathrm{Contrast}=\sum_{i,j}|i-j|^2\,p(i,j),$$

$$\mathrm{Correlation}=\sum_{i,j}\frac{(i-\mu_i)(j-\mu_j)\,p(i,j)}{\sigma_i\sigma_j},$$

$$\mathrm{Homogeneity}=\sum_{i,j}\frac{p(i,j)}{1+|i-j|},$$

where µ and σ are the mean and standard deviation of the co-occurrence matrix. The subbands at each level are used to form a matrix, the unwanted coefficients are removed, and the feature vector is obtained using the GLCM properties.
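A minimal sketch of these GLCM-based properties for a single subband is shown below, using scikit-image's graycomatrix/graycoprops (named greycomatrix/greycoprops in older releases); the quantisation to eight grey levels and the single offset (distance 1, angle 0) are illustrative assumptions, not the paper's stated settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greyco...' in older scikit-image

def glcm_features(subband, levels=8):
    """Energy, contrast, correlation and homogeneity of one subband,
    computed from its grey-level co-occurrence matrix."""
    # quantise the real-valued coefficients to a small number of grey levels
    edges = np.linspace(subband.min(), subband.max(), levels)
    q = (np.digitize(subband, edges) - 1).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p)[0, 0]
                     for p in ('energy', 'contrast', 'correlation', 'homogeneity')])
```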

PCA is an appearance-based statistical method used for feature extraction, data compression, redundancy removal, and prediction. The dimensionality is reduced by projecting the feature vector of the palmprint image onto a small feature space called eigenpalms; the projected weights form an eigenpalm vector that represents the principal components of the palmprint image. In the combined approach, the coefficients generated by the contourlet transform at multiple scales and directions are given as input to PCA, and the discriminating features of all the subbands are reduced. The dimensionality after the contourlet transform is reduced from 5178 × 5178 to 6 × 15 using PCA.
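The projection onto eigenpalms can be sketched with scikit-learn's PCA as below; the number of retained components is illustrative rather than the paper's exact setting.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_eigenpalms(train_features, n_components=15):
    """Project the high-dimensional contourlet feature vectors onto a small
    number of principal components ('eigenpalms'). train_features holds one
    feature vector per training palmprint; n_components is illustrative."""
    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(np.asarray(train_features, dtype=float))
    return pca, reduced
```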

Fig. 8

Block diagram of the contourlet transform.

Fig. 9

Contourlet features of the palmprint image without preprocessing.

Fig. 10

Contourlet features of the palmprint image with preprocessing.

Pattern matching—score level fusion

The Euclidean classifier is a nearest-mean classifier commonly used to calculate the similarity between two feature sets. The matching score is computed from the Euclidean distance between the input feature vector and the template feature vector. For feature vectors x and y of length N, the Euclidean distance is

$$d(x,y)=\lVert x-y\rVert_2=\sqrt{\sum_{i=1}^{N}(x_i-y_i)^2}.$$
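A direct implementation of this distance is a one-liner; the sketch below assumes the query and template feature vectors have already been extracted and have equal length.

```python
import numpy as np

def match_score(query, template):
    """Euclidean distance between the query and template feature vectors;
    a smaller distance means a closer match."""
    query = np.asarray(query, dtype=float)
    template = np.asarray(template, dtype=float)
    return np.sqrt(np.sum((query - template) ** 2))
```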

Sum-rule score level fusion

In the sum-rule based score-level fusion scheme (Jain et al., 2005), the fused score S is computed as S = S1 + S2, where S1 and S2 are the matching scores obtained from matching modules 1 and 2 of the multi-algorithmic palmprint authentication system after score normalization.
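Combined with the z-score normalization mentioned in the conclusion, the sum-rule can be sketched as follows; in practice the normalization statistics would be estimated from training scores rather than from the batch being fused.

```python
import numpy as np

def zscore(scores):
    """z-score normalisation of one matcher's scores."""
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.mean()) / scores.std()

def sum_rule(scores_a, scores_b):
    """Sum-rule fusion: S = S1 + S2 after score normalisation."""
    return zscore(scores_a) + zscore(scores_b)
```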

Weighted sum-rule score level fusion

In weighted sum-rule score-level fusion (Cui and Yang, 2011), the fused score S is computed as S = w1·S1 + w2·S2, where S1 and S2 are the matching scores after score normalization and w1 and w2 are the weights. The weights w1 and w2 are found using an empirical weighting scheme: each weight is varied from 0 to 1 in steps of 0.02 such that w1 + w2 = 1. The fused score is compared with a decision threshold to decide whether a person is genuine or an impostor.
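One possible reading of the empirical weight search is sketched below; the selection criterion (the gap between mean impostor and mean genuine fused distances) is an assumption, since the paper only states that the weights are varied in steps of 0.02 with w1 + w2 = 1.

```python
import numpy as np

def search_weights(s1, s2, labels, step=0.02):
    """Empirical weighting: sweep w1 from 0 to 1 in steps of 0.02 with
    w2 = 1 - w1 and keep the pair that best separates genuine (label 1)
    from impostor (label 0) fused distance scores. The separation
    criterion used here is an assumption."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    labels = np.asarray(labels)
    best_w, best_gap = (0.5, 0.5), -np.inf
    for w1 in np.arange(0.0, 1.0 + step, step):
        w2 = 1.0 - w1
        fused = w1 * s1 + w2 * s2
        gap = fused[labels == 0].mean() - fused[labels == 1].mean()
        if gap > best_gap:
            best_w, best_gap = (w1, w2), gap
    return best_w
```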

SVM score level fusion

The Support Vector Machine (SVM) is based on the principle of structural risk minimization and is used here to fuse the matching scores obtained from the classifiers after score normalization. It yields satisfactory results by constructing an optimal separating hyperplane, nonlinear in the original score space, that separates the two classes of scores more effectively.

The training data x_i = (x_{i1}, x_{i2}, …, x_{iN}) are the input vectors of normalized matching scores, where i = 1, …, N indexes the matching scores used for training and the class label y_i ∈ {1, −1} (Wang and Han, 2009). In the testing phase, the fused score F_s of a test input pattern is calculated as F_s = w·x_t + b, where w is the weight vector, x_t is the test pattern of matching scores obtained from the classifiers, and b is the bias. Finally, the test pattern x_t is assigned to the genuine class if its predicted label is 1 and to the impostor class if it is −1 (Vatsa et al., 2008). The SVM fusion rule enhanced the recognition performance and reduced the error rates compared with the sum-rule for the proposed multi-algorithmic palmprint authentication system.
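A hedged sketch of this fusion step using scikit-learn is given below; the RBF kernel and the +1/−1 label encoding follow common practice and the text above, but the paper does not specify its kernel or training parameters.

```python
import numpy as np
from sklearn.svm import SVC

def svm_fusion(train_scores, train_labels, test_scores):
    """Fuse pairs of normalised matching scores [s1, s2] with an SVM.
    train_labels are +1 (genuine) or -1 (impostor); the RBF kernel is an
    assumption rather than the paper's stated setting."""
    svm = SVC(kernel='rbf')
    svm.fit(train_scores, train_labels)
    # F_s = w.x_t + b is the signed distance to the hyperplane; > 0 -> genuine
    decision = svm.decision_function(test_scores)
    return np.where(decision > 0, 1, -1)
```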

Results and discussion

The performance of the multi-algorithmic palmprint authentication system based on score-level fusion schemes is analyzed in this section. The experimental simulation was carried out in MATLAB. The proposed technique was tested on real biometric samples collected in the laboratory, consisting of 500 palmprint images from 250 subjects. The palmprint images were captured using a Logitech web camera at a resolution of 640 × 480 pixels.

Performance evaluation of multi-algorithmic palmprint verification system

The performance of the multi-algorithmic palmprint score-level fusion schemes was evaluated by plotting ROC curves in terms of GAR and FAR, as shown in Figure 11; GAR is equivalent to (1 − FRR). At an FAR of 0.1%, the GAR of the SVM fusion method is 98.00%, that of the weighted sum-rule is 96.75%, and that of the sum-rule is 96.00%. Among these fusion approaches, SVM score-level fusion gives the best GAR of 98.00%.

The equal error rate (EER) is the point on an ROC curve where FAR and FRR assume the same value. ROC curves of the SVM, weighted sum-rule, and sum-rule score-level fusion schemes in terms of FRR and FAR are shown in Figure 12. From these curves, an EER of 1.5% was obtained for the SVM fusion method, while the weighted sum-rule and sum-rule match-score-level fusion approaches gave EERs of 2.20% and 2.50%, respectively. The experimental results demonstrate that SVM match-score-level fusion outperforms the sum-rule and weighted sum-rule methods.
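For reference, the EER reported above can be read off a threshold sweep as in the sketch below, which assumes FAR and FRR have been evaluated over the same set of decision thresholds.

```python
import numpy as np

def equal_error_rate(far, frr):
    """EER: the operating point at which FAR and FRR are equal. far and
    frr must be evaluated over the same sweep of decision thresholds."""
    far, frr = np.asarray(far, float), np.asarray(frr, float)
    idx = np.argmin(np.abs(far - frr))     # threshold where the two curves cross
    return (far[idx] + frr[idx]) / 2.0
```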

The execution times of the single-algorithmic and multi-algorithmic approaches for palmprint recognition are shown in Tables 1–3. It can be observed that the multi-algorithmic approach takes longer than the single-algorithmic approaches.

Table 1. Execution time for the contourlet transform based palmprint recognition system (one sample per user).

Process type Execution time (ms)
Preprocessing 25
Feature extraction 275
Pattern matching 50
Total execution time 350

Table 2. Execution time for the DT-CWT based palmprint recognition system (one sample per user).

Process type Execution time (ms)
Preprocessing 25
Feature extraction 250
Pattern matching 50
Total execution time 325

Table 3. Execution time for the multi-algorithmic palmprint recognition system (one sample per user).

Process type Execution time (ms)
Preprocessing 25
Feature extraction 350
Pattern matching 50
Score normalization 100
Fusion at score level 250
Total execution time 775

Fig. 11

ROC curves of multi-algorithmic palmprint score level fusion in terms of GAR and FAR.

Fig. 12

ROC curves of multi-algorithmic palmprint score level fusion in terms of FRR and FAR.

Conclusion

The multi-algorithmic approach for palmprint verification makes the system adaptive to variations in image capture and overcomes the problems of pose, illumination, and orientation variation. The palmprint features are extracted using two feature extraction algorithms, namely the contourlet transform with PCA and the DT-CWT. The outputs of the multiple algorithms are normalized using z-score normalization, and their scores are fused to decide whether the user is genuine or an impostor. SVM score-level fusion increased the accuracy of the palmprint verification system and lowered its error rate. If three or four feature extraction algorithms are used for each biometric trait, the recognition rate improves, but recognition takes longer because the processing complexity is greater. The system can be further enhanced with two or more modalities in order to make it more resistant to spoofing.
