
An Empirical Study of a Simple Incremental Classifier Based on Vector Quantization and Adaptive Resonance Theory


References

Alcalá-Fdez, J., Fernandez, A., Luengo, J., Derrac, J., García, S., Sanchez, L. and Herrera, F. (2011). KEEL data-mining software tool: Data set repository, integration of algorithms and experimental analysis framework, Journal of Multiple-Valued Logic and Soft Computing 17(2–3): 255–287.

Altman, N.S. (1992). An introduction to kernel and nearest-neighbor nonparametric regression, The American Statistician 46(3): 175–185.

Banerjee, S., Bhattacharjee, P. and Das, S. (2017). Performance of deep learning algorithms vs. shallow models, in extreme conditions—Some empirical studies, in B.U. Shankar et al. (Eds), Pattern Recognition and Machine Intelligence, Springer International Publishing, Cham, pp. 565–574.

Bifet, A. and Gavaldà, R. (2009). Adaptive learning from evolving data streams, in N.M. Adams et al. (Eds), Advances in Intelligent Data Analysis VIII, Springer, Berlin/Heidelberg, pp. 249–260.

Breiman, L. (2001). Random forests, Machine Learning 45(1): 5–32.

Breiman, L., Friedman, J.H., Olshen, R.A. and Stone, C.J. (1984). Classification and Regression Trees, Wadsworth International Group, Belmont.

Carpenter, G.A. and Grossberg, S. (1987). A massively parallel architecture for a self-organizing neural pattern recognition machine, Computer Vision, Graphics, and Image Processing 37(1): 54–115.

Carpenter, G.A., Grossberg, S. and Reynolds, J.H. (1991). ARTMAP: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network, Neural Networks 4(5): 565–588.

Carpenter, G.A., Grossberg, S., Markuzon, N., Reynolds, J.H. and Rosen, D.B. (1992). Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps, IEEE Transactions on Neural Networks 3(5): 698–713.

Chan, T.F., Golub, G.H. and LeVeque, R.J. (1979). Updating formulae and a pairwise algorithm for computing sample variances, Stanford Working Paper STAN-CS-79-773: 1–22, Stanford University, Stanford, http://i.stanford.edu/pub/cstr/reports/cs/tr/79/773/CS-TR-79-773.pdf.

Chang, C.-C. and Lin, C.-J. (2011). LIBSVM: A library for support vector machines, ACM Transactions on Intelligent Systems and Technology 2(3): 1–27.

Chen, T. and Guestrin, C. (2016). XGBoost: A scalable tree boosting system, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, USA, pp. 785–794.

Czmil, S. (2021). Python implementation of evolving vector quantization for classification of on-line data streams (Version 0.0.2), Computer software, https://github.com/sylwekczmil/evq.

Czmil, S., Kluska, J. and Czmil, A. (2022). CACP: Classification algorithms comparison pipeline, SoftwareX 19: 101134.

Duch, W., Adamczak, R. and Diercksen, G.H.F. (2000). Classification, association and pattern completion using neural similarity based methods, International Journal of Applied Mathematics and Computer Science 10(4): 747–766.

Elsayad, A.M. (2009). Classification of ECG arrhythmia using learning vector quantization neural networks, 2009 International Conference on Computer Engineering & Systems, Cairo, Egypt, pp. 139–144.

Fernández-Delgado, M., Cernadas, E., Barro, S. and Amorim, D. (2014). Do we need hundreds of classifiers to solve real world classification problems?, Journal of Machine Learning Research 15(90): 3133–3181.

Friedman, J.H. (2001). Greedy function approximation: A gradient boosting machine, The Annals of Statistics 29(5): 1189–1232.

Galbraith, B. (2017). Adaptive resonance theory models, Computer software, https://github.com/AIOpenLab/art.

Gomes, H.M., Bifet, A., Read, J., Barddal, J.P., Enembreck, F., Pfahringer, B., Holmes, G. and Abdessalem, T. (2017). Adaptive random forests for evolving data stream classification, Machine Learning 106(9–10): 1469–1495.

Hastie, T., Rosset, S., Zhu, J. and Zou, H. (2009). Multi-class AdaBoost, Statistics and Its Interface 2(3): 349–360.

Hastie, T., Tibshirani, R. and Friedman, J. (2008). The Elements of Statistical Learning, Springer, New York.

Holte, R.C. (1993). Very simple classification rules perform well on most commonly used data sets, Machine Learning 11(1): 63–90.

Huang, J. and Ling, C.X. (2005). Using AUC and accuracy in evaluating learning algorithms, IEEE Transactions on Knowledge and Data Engineering 17(3): 299–310.

Hulten, G., Spencer, L. and Domingos, P. (2001). Mining time-changing data streams, Proceedings of the 7th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD’01, San Francisco, USA, pp. 97–106.

James, G., Witten, D., Hastie, T. and Tibshirani, R. (2013). An Introduction to Statistical Learning: With Applications in R, Springer, New York.

Kasuba, T. (1993). Simplified fuzzy ARTMAP, AI Expert 8: 18–25.

Kingma, D.P. and Ba, J. (2015). Adam: A method for stochastic optimization, Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), San Diego, USA.

Kluska, J. and Madera, M. (2021). Extremely simple classifier based on fuzzy logic and gene expression programming, Information Sciences 571: 560–579.

Kohonen, T. (2001). Self-Organizing Maps, 3rd Edn, Springer-Verlag, Berlin/Heidelberg.

Kolter, J.Z. and Maloof, M.A. (2005). Using additive expert ensembles to cope with concept drift, Proceedings of the 22nd International Conference on Machine Learning (ICML-2005), Bonn, Germany, pp. 449–456.

Kolter, J.Z. and Maloof, M.A. (2007). Dynamic weighted majority: An ensemble method for drifting concepts, Journal of Machine Learning Research 8(91): 2755–2790.

Kulczycki, P. and Kowalski, P.A. (2015). Bayes classification for nonstationary patterns, International Journal of Computational Methods 12(2): 1550008.

Kusy, M. and Zajdel, R. (2021). A weighted wrapper approach to feature selection, International Journal of Applied Mathematics and Computer Science 31(4): 685–696, DOI: 10.34768/amcs-2021-0047.

Lang, K.J. and Witbrock, M.J. (1988). Learning to tell two spirals apart, The 1988 Connectionist Models Summer School, Pittsburgh, USA, pp. 52–59.

Lee, S., Chang, K. and Baek, J.-G. (2021). Incremental learning using generative-rehearsal strategy for fault detection and classification, Expert Systems with Applications 184: 115477.

Leo, J. and Kalita, J. (2022). Incremental deep neural network learning using classification confidence thresholding, IEEE Transactions on Neural Networks and Learning Systems 33(12): 7706–7716.

Lughofer, E. (2008a). Evolving vector quantization for classification of on-line data streams, 2008 International Conference on Computational Intelligence for Modelling Control & Automation (CIMCA 2008), Vienna, Austria, pp. 779–784.

Lughofer, E. (2008b). Extensions of vector quantization for incremental clustering, Pattern Recognition 41(3): 995–1011.

Luo, Y., Yin, L., Bai, W. and Mao, K. (2020). An appraisal of incremental learning methods, Entropy 22(11): 1190.

Manapragada, C., Webb, G.I. and Salehi, M. (2018). Extremely fast decision tree, Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, pp. 1953–1962.

Montiel, J., Read, J., Bifet, A. and Abdessalem, T. (2018). Scikit-Multiflow: A multi-output streaming framework, Journal of Machine Learning Research 19(72): 1–5.

Oza, N.C. and Russell, S.J. (2001). Online bagging and boosting, in T.S. Richardson and T.S. Jaakkola (Eds), Proceedings of the 8th International Workshop on Artificial Intelligence and Statistics, Key West, USA, pp. 229–236.

Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Müller, A., Nothman, J., Louppe, G., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M. and Duchesnay, E. (2011). Scikit-learn: Machine learning in Python, Journal of Machine Learning Research 12: 2825–2830.

Polikar, R., Upda, L., Upda, S. and Honavar, V. (2001). Learn++: An incremental learning algorithm for supervised neural networks, IEEE Transactions on Systems, Man, and Cybernetics C: Applications and Reviews 31(4): 497–508.

Pratama, M., Pedrycz, W. and Lughofer, E. (2018). Evolving ensemble fuzzy classifier, IEEE Transactions on Fuzzy Systems 26(5): 2552–2567.

Pratama, M., Pedrycz, W. and Webb, G.I. (2020). An incremental construction of deep neuro fuzzy system for continual learning of nonstationary data streams, IEEE Transactions on Fuzzy Systems 28(7): 1315–1328.

Rutkowski, L. and Cierniak, R. (1996). Image compression by competitive learning neural networks and predictive vector quantization, Applied Mathematics and Computer Science 6(3): 431–445.

Shevchuk, Y. (2015). NeuPy (Version 1.18.5), Computer software, http://neupy.com/.

Shi, X., Wong, Y.D., Li, M.Z.-F., Palanisamy, C. and Chai, C. (2019). A feature learning approach based on XGBoost for driving assessment and risk prediction, Accident Analysis & Prevention 129: 170–179.

Skubalska-Rafajlowicz, E. (2000). One-dimensional Kohonen LVQ nets for multidimensional pattern recognition, International Journal of Applied Mathematics and Computer Science 10(4): 767–778.

Sokolova, M. and Lapalme, G. (2009). A systematic analysis of performance measures for classification tasks, Information Processing & Management 45(4): 427–437.

Stapor, K. (2018). Evaluating and comparing classifiers: Review, some recommendations and limitations, in M. Kurzynski et al. (Eds), Proceedings of the 10th International Conference on Computer Recognition Systems, CORES 2017, Springer International Publishing, Cham, pp. 12–21.

Tantithamthavorn, C., McIntosh, S., Hassan, A.E. and Matsumoto, K. (2019). The impact of automated parameter optimization on defect prediction models, IEEE Transactions on Software Engineering 45(7): 683–711.

Tibshirani, R., Hastie, T., Narasimhan, B. and Chu, G. (2002). Diagnosis of multiple cancer types by shrunken centroids of gene expression, Proceedings of the National Academy of Sciences 99(10): 6567–6572.

Trawiński, B., Smętek, M., Telec, Z. and Lasota, T. (2012). Nonparametric statistical analysis for multiple comparison of machine learning regression algorithms, International Journal of Applied Mathematics and Computer Science 22(4): 867–881, DOI: 10.2478/v10006-012-0064-z.

Vakil-Baghmisheh, M. and Pavešić, N. (2003). A fast simplified fuzzy ARTMAP network, Neural Processing Letters 17(3): 273–316.

Villuendas-Rey, Y., Rey-Benguría, C.F., Ferreira-Santiago, Á., Camacho-Nieto, O. and Yáñez-Márquez, C. (2017). The naïve associative classifier (NAC): A novel, simple, transparent, and accurate classification model evaluated on financial data, Neurocomputing 265: 105–115.

Wolpert, D. and Macready, W. (1997). No free lunch theorems for optimization, IEEE Transactions on Evolutionary Computation 1(1): 67–82.

Żabiński, T., Maczka, T. and Kluska, J. (2017). Industrial platform for rapid prototyping of intelligent diagnostic systems, in W. Mitkowski et al. (Eds), Trends in Advanced Intelligent Control, Optimization and Automation, Springer International Publishing, Cham, pp. 712–721.

Żabiński, T., Maczka, T., Kluska, J., Kusy, M., Hajduk, Z. and Prucnal, S. (2014). Failures prediction in the cold forging process using machine learning methods, in L. Rutkowski et al. (Eds), Artificial Intelligence and Soft Computing, Springer International Publishing, Cham, pp. 622–633.

Škrjanc, I., Iglesias, J.A., Sanchis, A., Leite, D., Lughofer, E. and Gomide, F. (2019). Evolving fuzzy and neuro-fuzzy approaches in clustering, regression, identification, and classification: A survey, Information Sciences 490: 344–368.
