Deep Learning Models for Biometric Recognition based on Face, Finger vein, Fingerprint, and Iris: A Survey
Article category: Article
Published: 15 Jun 2024
Page range: 117–157
Received: 23 May 2024
Accepted: 07 Jun 2024
DOI: https://doi.org/10.2478/jsiot-2024-0007
© 2023 Saif Mohanad Kadhim et al., published by Sciendo
This work is licensed under the Creative Commons Attribution 4.0 International License.
Figure 3: Example of a general biometric system [8]
IRIS-BASED DEEP LEARNING MODELS USING THE CASIA-IRIS-THOUSAND DATASET

| Method | Year | Architecture | Accuracy | EER |
|---|---|---|---|---|
| Liu, N., et al. | 2016 | DCNN | - | 0.15 |
| Nguyen, K., et al. | 2017 | DCNN | 98.8% | - |
| Alaslani, M.G. | 2018 | Alex-Net model + SVM | 96.6% | - |
| Lee, Y.W., et al. | 2019 | Deep ResNet | - | 1.3331 |
| Liu, Ming, et al. | 2019 | DCNN | 83.1% | 0.16 |
| Chen, Y., et al. | 2021 | DCNN | 99.14% | - |
| Alinia Lat, Reihan, et al. | 2022 | DCNN | 99.84% | 1.87 |
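The EER column in these tables is the equal error rate: the operating point at which the false accept rate (impostors wrongly accepted) equals the false reject rate (genuine users wrongly rejected). A minimal sketch of how it is computed from matcher scores; the genuine/impostor score lists below are illustrative, not taken from any surveyed paper:

```python
# Minimal EER sketch: sweep thresholds over the observed scores and
# return the point where FAR and FRR are (approximately) equal.

def eer(genuine, impostor):
    """Equal error rate from genuine and impostor similarity scores."""
    best = None
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

genuine = [0.9, 0.8, 0.85, 0.7, 0.95]   # same-identity match scores
impostor = [0.3, 0.4, 0.2, 0.75, 0.1]   # different-identity match scores
print(f"EER = {eer(genuine, impostor):.2%}")  # → EER = 20.00%
```

A lower EER means a better trade-off between security and convenience, which is why some surveyed systems report EER instead of (or alongside) accuracy.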
FACE-BASED DEEP LEARNING RESULTS USING THE LFW DATASET

| Method | Year | Architecture | Accuracy | EER |
|---|---|---|---|---|
| Tian, L., et al. | 2016 | Multiple Scales Combined DL | 93.16% | - |
| Xiong, C., et al. | 2016 | Deep Mixture Model (DMM) + Convolutional Fusion Network (CFN) | 87.50% | 1.57 |
| Al-Waisy, A. S., et al. | 2017 | Deep Belief Network (DBN) | 98.83% | 0.012 |
| Zhuang, Ni, et al. | 2018 | Deep transfer NN | 84.34% | - |
| Santoso, K., et al. | 2018 | DL network using triplet loss | 95.5% | - |
| Li, Y., et al. | 2018 | DCNN | 97.2% | - |
| Luo, D., et al. | 2018 | Deep cascaded detection method | 99.43% | 0.16 |
| Kong, J., et al. | 2018 | Novel DLN | 95.84% | - |
| Iqbal, M., et al. | 2019 | DCNN | 99.77% | - |
| Khan, M. Z., et al. | 2019 | DCNN | 97.9% | - |
| Elmahmudi, A., et al. | 2019 | CNN + pre-trained VGG | 99% | - |
| Wang, P., et al. | 2019 | Deep class-skewed learning method | 99.9% | - |
| Bendjillali, R., et al. | 2019 | DCNN | 98.13% | - |
| Goel, T., et al. | 2020 | Deep Convolutional-Optimized Kernel Extreme Learning Machine (DC-OKELM) | 99.2% | 0.04 |
| Zhang, J., et al. | 2022 | Lightened CNN | 99.9% | - |
PERFORMANCE RESULTS OF THE BEST FINGER VEIN-BASED DEEP LEARNING MODELS

| Method | Year | Dataset | Architecture | Accuracy | EER |
|---|---|---|---|---|---|
| Nguyen, Dat Tien, et al. | 2017 | - | CNN + SVM | - | 0.00 |
| Chen, Cheng, et al. | 2017 | Collected | DBN + CNN | 99.6% | - |
| Fang, Y., et al. | 2018 | MMCBNU | DCNN | - | 0.10 |
| Wang, Jun, et al. | 2018 | PolyU | CNN + SVM | - | 0.068 |
| Das, Rig, et al. | 2018 | UTFVP | CNN | 98.33% | - |
| Xie, C., et al. | 2019 | - | CNN + Supervised Discrete Hashing | - | 0.093 |
| Li, J., et al. | 2019 | SDUMLA | Graph Neural Network (GNN) | 99.98% | - |
| Zhang, J., et al. | 2019 | SDUMLA | Fully Convolutional GAN + CNN | 99.15% | 0.87 |
| Hou, B., et al. | 2019 | FV-USM | Convolutional Auto-Encoder (CAE) + SVM | 99.95% | 0.12 |
| Kamaruddin, N.M., et al. | 2019 | FV-USM | PCANet | 100% | - |
| Yang, W., et al. | 2019 | MMCBNU | Proposed DL (multilayer extreme learning machine + binary decision diagram (BDD)) | 98.70% | - |
| Zhao, D., et al. | 2020 | MMCBNU | DCNN | 99.05% | 0.503 |
| Kuzu, R.S. | 2020 | SDUMLA | DCNN + Autoencoder | 99.99% | 0.009 |
| Kuzu, R., et al. | 2020 | Collected | CNN + LSTM | 99.13% | - |
| Boucherit, I., et al. | 2020 | THU-FVFDT2 | DCNN | 99.56% | - |
| Zhao, Jia-Yi, et al. | 2020 | FV-USM | DCNN | 98% | - |
| Noh, K. J., et al. | 2020 | HKPolyU | DCNN | - | 0.05 |
| Zeng, J., et al. | 2020 | MMCBNU | RNN + Conditional Random Field (CRF) | - | 0.36 |
| Bilal, A., et al. | 2021 | SDUMLA | DCNN | 99.84% | - |
| Shen, J., et al. | 2021 | PKU-FVD | DCNN | 99.6% | 0.67 |
| Wang, K., et al. | 2021 | FV-USM | Multi-Receptive Field Bilinear CNN | 100% | - |
| Hou, B. | 2021 | FV-USM | DCNN | 99.79% | 0.25 |
| Huang, J., et al. | 2021 | MMCBNU | Joint Attention Finger Vein Network | - | 0.08 |
| Huang, Z., et al. | 2021 | SDUMLA | DCNN | 99.53% | - |
| Shaheed, K., et al. | 2022 | SDUMLA | DCNN | 99% | - |
| Muthusamy, D. | 2022 | SDUMLA | Deep Perceptive Fuzzy NN (DPFNN) | 98% | - |
| Hou, B., et al. | 2022 | FV-USM | Triplet-Classifier GAN | 99.66% | 0.03 |
IRIS-BASED DEEP LEARNING MODEL RESULTS USING THE IITD DATASET

| Method | Year | Architecture | Accuracy | EER |
|---|---|---|---|---|
| Al-Waisy, Alaa S., et al. | 2018 | DCNN + softmax | 100% | - |
| Alaslani, M.G. | 2018 | Alex-Net + SVM | 98.3% | - |
| Chen, Ying, et al. | 2019 | DCNN + softmax | 98.1% | - |
| Liu, Ming, et al. | 2019 | DCNN | 86.8% | - |
| Chen, Y., et al. | 2020 | DCNN | 99.3% | 0.74 |
| Chen, Y., et al. | 2021 | DCNN | 97.24% | 0.18 |
| Chen, Ying, et al. | 2021 | DenseSENet | 99.06% | 0.945 |
| Alinia Lat, Reihan, et al. | 2022 | DCNN | 99.99% | 0.45 |
IRIS-BASED DEEP LEARNING MODELS USING MULTIPLE KINDS OF IRIS DATASETS

| Dataset | Method | Architecture | Accuracy | EER |
|---|---|---|---|---|
| CASIA-V4 | He, Fei, et al. | Gabor + DBN | 99.998% | - |
| CASIA-V4 | Wang, Zi, et al. | Convolutional and residual network | 99.08% | - |
| CASIA-V4 | Zhang, Wei, et al. | Fully Dilated U-Net (FD-UNet) | 97.36% | - |
| CASIA-V4 | Azam, M.S., et al. | DCNN + SVM | 96.3% | - |
| CASIA-V4 | Chen, Y., et al. | DCNN | 97.35% | 1.05 |
| UBIRIS | Proença, H., et al. | DCNN | 99.8% | 0.019 |
| UBIRIS | Wang, Zi, et al. | Convolutional and residual network | 96.12% | - |
| UBIRIS | Zhang, Wei, et al. | Fully Dilated U-Net (FD-UNet) | 94.81% | - |
| UBIRIS | Shirke, S.D., et al. | DBN | 97.9% | - |
| ND | Nguyen, Kien, et al. | Pre-trained CNNs | 98.7% | - |
| ND | Zhang, Wei, et al. | Fully Dilated U-Net (FD-UNet) | 96.74% | - |
BIOMETRIC-BASED SYSTEM REQUIREMENTS

| Requirement | Description |
|---|---|
| Universality | All authorized individuals must possess the biometric trait in use |
| Uniqueness | No two authorized individuals share the same characteristics of the trait |
| Permanence | The captured trait does not change over a given period of time |
| Performance | Measured by the achieved security, speed, accuracy, and robustness |
| Acceptability | The population of users agrees to provide the trait without objection |
| Circumvention | The degree to which the system can be deceived with a fake biometric |
| Collectability | The ease of gathering trait samples in a manner comfortable for the individual |
BIOMETRIC FEATURES AND APPLICATIONS

| Biometric trait | Significant features | Applications |
|---|---|---|
| Face | No physical contact needed. Easy template storage. Comfortable, with less complicated statistics. Rapid identification. Changes with time, age, and incidental events. Distinguishing twins is difficult. Affected by ambient lighting. May be partially occluded by other objects. | Access control. Face ID. Human-computer interaction. Criminal identification. Surveillance. Smart cards. |
| Fingerprint | Modern, reliable, safe, highly accurate, and low-cost. Rapid matching. Needs little memory space. Affected by wounds, dust, and twists. Requires physical contact. | Driver authentication. Criminal identification and forensics. Authentication for license and visa cards. Access control. |
| Iris | Scalable, accurate, and widely applicable. Small sample size. Rapid processing but high cost. Unparalleled structure. Remains stable throughout life. Difficult to alter. High randomness. No physical contact needed, only user cooperation. Can be hidden by eye parts such as lashes. Affected by some illness conditions. | Criminal identification and forensics. Identification. Access control. National security screening at seaports, land borders, and airports. |
| Finger vein | Sanitary and contactless. Highly accurate and hard to spoof. Unique. Affected by body temperature. Affected by some diseases. Tiny template size. Minimal processing. | Driver identification. Door security login. Banking services. Physical access monitoring and time attendance. Airports, hospitals, schools. |
FACE-BASED DEEP LEARNING RESULTS USING THE Yale AND Yale FACE B DATASETS

| Method | Year | Architecture | Accuracy | EER |
|---|---|---|---|---|
| Tripathi, B. K. | 2017 | One-Class-in-One-Neuron (OCON) DL | 97.4% | - |
| Kong, J., et al. | 2018 | Novel DLN | 100% | - |
| Görgel, P., et al. | 2019 | Deep Stacked De-Noising Sparse Auto-encoders (DS-DSA) | 98.16% | - |
| Li, Y. K., et al. | 2019 | L1-2D2PCANet DL network | 96.86% | 0.77 |
| Goel, T., et al. | 2020 | Deep Convolutional-Optimized Kernel Extreme Learning Machine (DC-OKELM) | - | 6.67 |
PERFORMANCE RESULTS OF THE BEST FINGERPRINT-BASED DEEP LEARNING MODELS

| Method | Year | Dataset | Architecture | Accuracy | EER |
|---|---|---|---|---|---|
| Kim, S., et al. | 2016 | Collected | DBN | 99.4% | - |
| Jeon, W. S., et al. | 2017 | FVC | DCNN | 97.2% | - |
| Wang, Z., et al. | 2017 | NIST | Novel approach (D-LVQ) | 99.075% | - |
| Peralta, D., et al. | 2018 | Collected | DCNN | 99.6% | - |
| Yu, Y., et al. | 2018 | Collected | DCNN | 96.46% | - |
| Lin, C., et al. | 2018 | - | DCNN | 99.89% | 0.64 |
| Jung, H. Y., et al. | 2018 | - | DCNN | 98.6% | - |
| Yuan, C., et al. | 2019 | LivDet 2013 | Deep Residual Network (DRN) | 97.04% | - |
| Haider, Amir, et al. | 2019 | Collected | DCNN | 95.94% | - |
| Song, D., et al. | 2019 | Collected | 1-D CNN | - | 0.06 |
| Uliyan, D.M., et al. | 2020 | LivDet 2013 | Deep Boltzmann Machines + KNN | 96% | - |
| Liu, Feng, et al. | 2020 | - | DeepPoreID | - | 0.16 |
| Yang, X., et al. | 2020 | Collected | DCNN | 97.1% | - |
| Arora, S., et al. | 2020 | DigitalPersona 2015 | DCNN | 99.80% | - |
| Zhang, Z., et al. | 2021 | - | DCNN | 98.24% | - |
| Ahsan, M., et al. | 2021 | Collected | Gabor filtering + DCNN + PCA | 99.87% | 4.28 |
| Leghari, M., et al. | 2021 | Collected | DCNN | 99.87% | - |
| Li, H. | 2021 | NIST | DCNN | 98.65% | - |
| Lee, Samuel, et al. | 2021 | NIST | Proposed Pix2Pix DL model | 100% | - |
| Nahar, P., et al. | 2021 | - | DCNN | 99.1% | - |
| Ibrahim, A.M., et al. | 2021 | - | DCNN | 99.22% | - |
| Gustisyaf, A.I., et al. | 2021 | Collected | DCNN | 99.9667% | - |
| Yuan, C., Yu, et al. | 2022 | - | DCNN | - | 0.3 |
| Saeed, F., et al. | 2022 | FVC | DCNN | 98.89% | - |
THE ADVANTAGES AND DISADVANTAGES OF THE MOST WIDELY USED DEEP LEARNING ARCHITECTURES
| Architecture | Advantages | Disadvantages |
|---|---|---|
| CNN | Unsupervised feature learning. Low complexity thanks to the parameter count and weight sharing. High performance in image recognition and classification. | Requires a large dataset. Long training time. Struggles with input variations (e.g., orientation, position, environment). |
| RNN | Can remember and learn from past data, giving better predictions. Captures long sequential patterns in large data. Often used for natural language processing tasks. | Computationally expensive. More prone to overfitting and vanishing-gradient problems. Hard to optimize due to the large number of layers and parameters. |
| LSTM | Handles long-term dependencies better. The gated LSTM cell makes it less susceptible to the vanishing-gradient problem. Very effective at modeling complex sequential data. | More complicated than plain RNNs and needs more training data to learn effectively. Not suited for all prediction or classification tasks. Slow to train on large datasets. Does not work well for all kinds of data, such as nonlinear or noisy data. |
| GRU | Uses less memory and is faster than LSTM. Has fewer parameters than LSTM. | Low learning efficiency due to a slow convergence rate. Long training time. May suffer from underfitting. |
| AE | Unsupervised; needs no labeled data for training. Converts high-dimensional data into low-dimensional features. Scales well as data grows. Reduces noise in the input data. | High complexity and computational cost. Needs a large training dataset. Loses interpretability when representing features in a latent space. |
| DBN | Unsupervised feature learning. Robust in classification (size, position, color, view angle, rotation). Applied to many kinds of datasets. Resistant to overfitting thanks to the RBMs' contribution to model regularization. Can manage missing data. | High complexity and computational cost. Needs a large training dataset. |
| GAN | Can handle partially labeled data. Efficiently generates samples that resemble the originals. Used for generating images and videos. | Hard to train because different data types must be supplied continuously. Training cannot complete when patterns are missing. Struggles with discrete data (e.g., text). |
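The weight sharing credited to CNNs in the table above can be illustrated with a minimal pure-Python convolution: one small kernel scans the entire image, so the number of learned parameters stays fixed regardless of input size. This is a sketch with illustrative values, not code from any surveyed model:

```python
# A single 2x2 kernel (4 shared weights) is applied at every image
# position — the essence of CNN weight sharing.

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) with one kernel."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out

def relu(fmap):
    """Element-wise ReLU nonlinearity, as used after CNN conv layers."""
    return [[max(0.0, v) for v in row] for row in fmap]

image = [[1, 0, 1, 0],          # toy 4x4 "image"
         [0, 1, 0, 1],
         [1, 0, 1, 0],
         [0, 1, 0, 1]]
kernel = [[1, -1], [-1, 1]]     # the same 4 weights cover every position
print(relu(conv2d(image, kernel)))
```

Because the same kernel is reused everywhere, doubling the image size adds no parameters; only the output feature map grows. This is the "low complexity due to weight sharing" trade-off the table refers to, and also why CNNs still need large datasets to learn good kernels.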