
Figure 1

An HGA-FLVQ Model Block Diagram.

Figure 2

A Cycle of Gas Sensor (TGS) Response in an E-nose Device.

Figure 3

Error Value Decrease in HGA-FLVQ Model Training using TGS813 Sensor Data.

Figure 4

Error Value Decrease in HGA-FLVQ Model Training using TGS822 Sensor Data.

Figure 5

Error Value Decrease in HGA-FLVQ Model Training using TGS2611 Sensor Data.

Final confusion matrix of the HGA-FLVQ model after re-examination by ZN staining.

n = 50               Predicted: No   Predicted: Yes   Total
ZN Staining: No      TN = 22         FP = 1           23
ZN Staining: Yes     FN = 1          TP = 26          27
Total                23              27               50

Confusion matrix of the HGA-FLVQ model using the ZN staining method.

n = 50               Predicted: No   Predicted: Yes   Total
ZN Staining: No      TN = 22         FP = 4           26
ZN Staining: Yes     FN = 1          TP = 23          24
Total                23              27               50

Confusion matrix of the FLVQ method using ZN staining as the gold standard.

n = 50               Predicted: No   Predicted: Yes   Total
ZN Staining: No      TN = 25         FP = 1           26
ZN Staining: Yes     FN = 7          TP = 17          24
Total                32              18               50
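The confusion matrices above are tallied by comparing each model's prediction for every test sample against the corresponding ZN-staining result. A minimal sketch of that tallying is given below; the example label lists are hypothetical and are not the study's data.

```python
# Minimal sketch: tally a 2x2 confusion matrix from predicted labels
# against ZN-staining labels (1 = TB positive, 0 = TB negative).
# The two example lists below are hypothetical, not the study's data.

def confusion_matrix(zn_labels, predictions):
    """Return (TN, FP, FN, TP) counts for binary labels."""
    tn = fp = fn = tp = 0
    for truth, pred in zip(zn_labels, predictions):
        if truth == 0 and pred == 0:
            tn += 1
        elif truth == 0 and pred == 1:
            fp += 1
        elif truth == 1 and pred == 0:
            fn += 1
        else:
            tp += 1
    return tn, fp, fn, tp

zn = [0, 0, 1, 1, 1, 0]            # hypothetical ZN-staining results
pred = [0, 1, 1, 1, 0, 0]          # hypothetical model predictions
print(confusion_matrix(zn, pred))  # -> (2, 1, 1, 2)
```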

Results of weight calculation and revision during LVQ training.

Iteration   Class weight      TGS813 Sensor   TGS822 Sensor   TGS2611 Sensor
1           Negative class    −0.051673025    0.26946884      −0.003179715
1           Positive class    −0.002036426    0.1985224       0.21423542
2           Negative class    −0.0356808      0.28372976      −0.00038830974
2           Positive class    0.028204879     0.14191468      0.16531087
3           Negative class    −0.0029952978   0.3213401       0.00271485
3           Positive class    0.052159406     0.10385808      0.13945574
4           Negative class    0.03716613      0.37145796      0.0097912615
4           Positive class    0.062625006     0.088705875     0.1361857
5           Negative class    0.07502196      0.407575        0.017326174
5           Positive class    0.06537599      0.080875896     0.12186478
6           Negative class    0.09876694      0.42662817      0.02240251
6           Positive class    0.06590489      0.07604013      0.1046309
7           Negative class    0.11628124      0.44089812      0.026141549
7           Positive class    0.06780671      0.074578255     0.09289357
8           Negative class    0.12786691      0.44939327      0.02856244
8           Positive class    0.06953817      0.07365509      0.08497784
9           Negative class    0.13485843      0.45396066      0.029982805
9           Positive class    0.07066574      0.07268719      0.08011716
10          Negative class    0.13828559      0.4564701       0.030701837
10          Positive class    0.07150341      0.072542615     0.07794089
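The table above records how the single negative-class and positive-class weights for each sensor are revised over ten training iterations. As a hedged illustration only (not the paper's exact procedure or learning-rate schedule), the sketch below implements the standard LVQ1 rule for one-dimensional weights: the nearest class weight is moved toward a sample of its own class and away from a sample of the other class. The initial weights, learning rate, and sample value are hypothetical placeholders.

```python
# Standard LVQ1 update for scalar (one-dimensional) class weights,
# as a hedged illustration of how the weights in the table could be revised.
# The learning rate and sample data below are hypothetical placeholders.

def lvq1_step(weights, sample, sample_class, alpha):
    """weights: dict class -> scalar weight; returns revised weights."""
    # Find the winning class: the weight closest to the sample.
    winner = min(weights, key=lambda c: abs(sample - weights[c]))
    w = dict(weights)
    if winner == sample_class:
        # Correct winner: move it toward the sample.
        w[winner] += alpha * (sample - w[winner])
    else:
        # Wrong winner: move it away from the sample.
        w[winner] -= alpha * (sample - w[winner])
    return w

weights = {"negative": -0.05, "positive": 0.00}   # hypothetical initial weights
weights = lvq1_step(weights, sample=0.12, sample_class="positive", alpha=0.05)
print(weights)
```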

Performance rates of the HGA-FLVQ model.

Performance rate                    Formula                  Result
Accuracy                            (TP + TN)/n              (23 + 22)/50 × 100% = 90.00%
Error rate                          (FP + FN)/n              (4 + 1)/50 × 100% = 10.00%
Sensitivity (true positive rate)    TP/(ZN Staining Yes)     23/24 × 100% = 95.83%
False positive rate                 FP/(ZN Staining No)      4/26 × 100% = 15.38%
Specificity (true negative rate)    TN/(ZN Staining No)      22/26 × 100% = 84.62%
Precision                           TP/(Predicted Yes)       23/27 × 100% = 85.19%
Prevalence                          TP/n                     23/50 × 100% = 46.00%
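The rates in the table follow directly from the confusion-matrix counts reported above (TN = 22, FP = 4, FN = 1, TP = 23, n = 50). A short sketch reproducing the same arithmetic:

```python
# Recompute the performance rates from the confusion-matrix counts
# reported above (TN = 22, FP = 4, FN = 1, TP = 23, n = 50).
tn, fp, fn, tp = 22, 4, 1, 23
n = tn + fp + fn + tp                      # 50 test samples

accuracy    = (tp + tn) / n                # 0.9000
error_rate  = (fp + fn) / n                # 0.1000
sensitivity = tp / (tp + fn)               # 0.9583  (true positive rate)
fpr         = fp / (fp + tn)               # 0.1538  (false positive rate)
specificity = tn / (tn + fp)               # 0.8462  (true negative rate)
precision   = tp / (tp + fp)               # 0.8519
prevalence  = tp / n                       # 0.4600  (as defined in the table)

for name, value in [("Accuracy", accuracy), ("Error rate", error_rate),
                    ("Sensitivity", sensitivity), ("False positive rate", fpr),
                    ("Specificity", specificity), ("Precision", precision),
                    ("Prevalence", prevalence)]:
    print(f"{name}: {value:.2%}")
```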

Confusion matrix of the HGA-FLVQ model using ZN staining re-examination.

n = 4                Re-examination ZN Staining: No   Re-examination ZN Staining: Yes   Total
ZN Staining: No      TN = 1                           FP = 3                            4
ZN Staining: Yes     FN = 0                           TP = 0                            0
Total                1                                3                                 4

Results of cluster-center revisions during FLVQ training.

TGS813 Sensor data
Iteration   Fuzziness parameter   Learning rate          Cluster centers        Error
1           1.100590              0.001075   0.001401    0.073105   0.072574    0.004639
2           1.101180              0.001309   0.001160    0.079481   0.068403    0.000058

TGS822 Sensor data
Iteration   Fuzziness parameter   Learning rate          Cluster centers        Error
1           1.100590              0.001113   0.001346    0.160899   0.142446    0.004211
2           1.101180              0.001361   0.001012    0.218137   0.103790    0.004771
3           1.101770              0.001516   0.000935    0.236398   0.101749    0.000338
4           1.102360              0.001731   0.000869    0.246578   0.106190    0.000123
5           1.102950              0.001982   0.000817    0.257252   0.110136    0.000129
6           1.103540              0.002266   0.000776    0.268349   0.113576    0.000135
7           1.104130              0.002558   0.000747    0.278854   0.116367    0.000118
8           1.104720              0.002829   0.000726    0.287788   0.118478    0.000084

TGS2611 Sensor data
Iteration   Fuzziness parameter   Learning rate          Cluster centers        Error
1           1.100590              0.003575   0.000705    0.040732   0.029254    0.001988
2           1.101180              0.002175   0.000788    0.074589   0.016031    0.001321
3           1.101770              0.002993   0.000715    0.092969   0.017238    0.000339
4           1.102360              0.003588   0.000688    0.101591   0.018471    0.000076
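Each row above records the fuzziness parameter, learning rate, cluster centers, and error for one training iteration. The sketch below shows a generic fuzzy-membership-weighted center update of the kind used in FLVQ-style clustering; the membership formula, fuzziness value, and sample data are illustrative assumptions and do not reproduce the paper's exact update rule.

```python
import numpy as np

# Hedged illustration of one fuzzy-membership-weighted cluster-center update,
# as used in FLVQ-style training. The data, fuzziness value, and update form
# are assumptions for illustration; the paper's exact rule is not reproduced.

def fuzzy_memberships(samples, centers, m):
    """Fuzzy c-means style memberships u[i, j] of sample i in cluster j."""
    d = np.abs(samples[:, None] - centers[None, :]) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def update_centers(samples, centers, m):
    """Move each center to the membership-weighted mean of the samples."""
    u = fuzzy_memberships(samples, centers, m) ** m
    return (u * samples[:, None]).sum(axis=0) / u.sum(axis=0)

samples = np.array([0.05, 0.07, 0.09, 0.15, 0.17, 0.19])  # hypothetical sensor maxima
centers = np.array([0.08, 0.16])                           # two cluster centers
print(update_centers(samples, centers, m=1.1006))
```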

Final confusion matrix of the LVQ method after re-examination by ZN staining.

n = 50               Predicted: No   Predicted: Yes   Total
ZN Staining: No      TN = 22         FP = 4           26
ZN Staining: Yes     FN = 3          TP = 21          24
Total                25              25               50

Final results of HGA-FLVQ model performance.

Performance rate                    Formula                 HGA-FLVQ model                 LVQ method                     FLVQ method
Accuracy                            (TP + TN)/n             (26 + 22)/50 × 100% = 96.00%   (21 + 25)/50 × 100% = 92.00%   (17 + 25)/50 × 100% = 84.00%
Error rate                          (FP + FN)/n             (1 + 1)/50 × 100% = 4.00%      (1 + 3)/50 × 100% = 8.00%      (1 + 7)/50 × 100% = 16.00%
Sensitivity (true positive rate)    TP/(ZN Staining Yes)    26/27 × 100% = 96.30%          21/24 × 100% = 87.50%          17/24 × 100% = 70.83%
False positive rate                 FP/(ZN Staining No)     1/23 × 100% = 4.35%            1/26 × 100% = 3.85%            1/26 × 100% = 3.85%
Specificity (true negative rate)    TN/(ZN Staining No)     22/23 × 100% = 95.65%          25/26 × 100% = 96.15%          25/26 × 100% = 96.15%
Precision                           TP/(Predicted Yes)      26/27 × 100% = 96.30%          21/22 × 100% = 95.45%          17/18 × 100% = 94.44%
Prevalence                          TP/n                    26/50 × 100% = 52.00%          21/50 × 100% = 42.00%          17/50 × 100% = 34.00%

Results of HGA-FLVQ model training.

TGS813 Sensor data
Iteration   Fuzziness parameter   Learning rate          Cluster centers        Error
1           1.100590              0.000987   0.001400    0.108494   0.024354    0.0014483775
2           1.101180              0.001541   0.000924    0.140709   0.033404    0.00111976
3           1.101770              0.002089   0.000798    0.163612   0.039210    5.5822276E-4
4           1.102360              0.002508   0.000750    0.177564   0.042487    2.0541892E-4
5           1.102950              0.002808   0.000727    0.186139   0.044436    7.73241E-5

TGS822 Sensor data
Iteration   Fuzziness parameter   Learning rate          Cluster centers        Error
1           1.100590              0.001144   0.001169    0.093718   0.212932    0.004191581
2           1.101180              0.001008   0.001355    0.097408   0.228077    2.4299692E-4
3           1.101770              0.000919   0.001562    0.102777   0.238711    1.41901E-4
4           1.102360              0.000857   0.001782    0.107080   0.248823    1.207616E-4
5           1.102950              0.000807   0.002042    0.110949   0.259704    1.3338469E-4
6           1.103540              0.000769   0.002332    0.114258   0.270786    1.3375138E-4
7           1.104130              0.000742   0.002620    0.116887   0.280968    1.105932E-4
8           1.104720              0.000723   0.002887    0.118887   0.289624    7.890986E-5

TGS2611 Sensor data
Iteration   Fuzziness parameter   Learning rate          Cluster centers        Error
1           1.100590              0.000644   0.005906    0.031570   0.029312    0.0014235964
2           1.101180              0.001916   0.000857    0.055587   0.020714    6.50747E-4
3           1.101770              0.002333   0.000767    0.081313   0.015552    6.8849424E-4
4           1.102360              0.003203   0.000704    0.096225   0.017693    2.2693272E-4
5           1.102950              0.003725   0.000683    0.103321   0.018730    5.1426956E-5

Confusion matrix of the LVQ method using ZN staining as the gold standard.

n = 50               Predicted: No   Predicted: Yes   Total
ZN Staining: No      TN = 25         FP = 1           26
ZN Staining: Yes     FN = 3          TP = 21          24
Total                28              22               50

Testing result for a ZN-staining-positive patient using the HGA-FLVQ model.

Amplitude order Target class Class distance 1 Class distance 2 Predicted class
1 2 0.369 0.424 1
2 2 0.357 0.392 1
3 2 0.285 0.238 2
4 2 0.262 0.177 2
5 2 0.337 0.318 2
6 2 0.317 0.150 2
7 2 0.321 0.076 2
8 2 0.310 0.087 2
9 2 0.307 0.090 2
10 2 0.290 0.107 2
11 2 0.286 0.111 2
12 2 0.305 0.092 2
13 2 0.286 0.111 2
14 2 0.284 0.113 2
15 2 0.2880 0.109 2
16 2 0.292 0.105 2
17 2 0.280 0.117 2
18 2 0.288 0.111 2
19 2 0.289 0.112 2
20 2 0.280 0.117 2
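In this table each test vector is assigned to the class whose reference vector is nearest, so the predicted class is 2 whenever class distance 2 is smaller than class distance 1. A minimal sketch of that decision rule, using the distance pairs from the first four rows of the table:

```python
# Minimal sketch of the distance-based decision rule used in the table:
# predict the class whose reference vector is nearest to the test sample.
# The (distance-to-class-1, distance-to-class-2) pairs are the first four
# rows of the table above.
distances = [(0.369, 0.424), (0.357, 0.392), (0.285, 0.238), (0.262, 0.177)]

for d1, d2 in distances:
    predicted = 1 if d1 < d2 else 2
    print(f"d1={d1:.3f}, d2={d2:.3f} -> predicted class {predicted}")
```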

Final confusion matrix of the FLVQ method after re-examination by ZN staining.

n = 50               Predicted: No   Predicted: Yes   Total
ZN Staining: No      TN = 22         FP = 4           26
ZN Staining: Yes     FN = 7          TP = 17          24
Total                29              21               50

Previous studies.

Reference              Sample type   Classification method                                                  Results
Fend et al. (2006)     Sputum        Back-propagation artificial neural network                             Sensitivity 89.09%, Specificity 91.14%
Gibson et al. (2009)   Sputum        Linear discriminant analysis                                           Average sensitivity 80%, average specificity 75%
Kolk et al. (2010)     Sputum        Rob electronic-nose: linear discriminant analysis                      Sensitivity 57–64%, Specificity 61–70%
                                     Rob electronic-nose: partial least squares discriminant analysis       Sensitivity 42–50%, Specificity 73–77%
                                     Walter electronic-nose: linear discriminant analysis                   Sensitivity 56–66%, Specificity 65–68%
                                     Walter electronic-nose: partial least squares discriminant analysis    Sensitivity 37–61%, Specificity 56–67%