Volume 12 (2022): Issue 3 (July 2022)
Journal Details
License
Format
Journal
eISSN
2449-6499
First Published
30 Dec 2014
Publication Frequency
4 times per year
Languages
English
Access type: Open Access

Towards a Very Fast Feedforward Multilayer Neural Networks Training Algorithm

Published online: 23 Jul 2022
Volume & Issue: Volume 12 (2022) - Issue 3 (July 2022)
Pages: 181 - 195
Received: 03 Jan 2022
Accepted: 06 Jun 2022
Abstract

This paper presents a novel fast algorithm for feedforward neural network training. It is based on the Recursive Least Squares (RLS) method commonly used for designing adaptive filters. In addition, it employs two techniques of linear algebra, namely the orthogonal transformation method known as the Givens Rotations (GR) and the QR decomposition, creating the GQR (symbolically, GR + QR = GQR) procedure for solving the normal equations in the weight update process. In this paper, a novel approach to the GQR algorithm is presented. The main idea is to reduce the computational cost of a single rotation by eliminating the square root calculation and reducing the number of multiplications. The proposed modification is based on the scaled version of the Givens rotations, denoted SGQR. This modification is expected to bring a significant reduction in training time compared with the classic GQR algorithm. The paper begins with an introduction and a description of the classic Givens rotation. Then, the scaled rotation and its use in the QR decomposition are discussed. The main section of the article presents the neural network training algorithm, which utilizes scaled Givens rotations and QR decomposition in the weight update process. Next, the experimental results of the proposed algorithm are presented and discussed. The experiments use several benchmarks combined with neural networks of various topologies. It is shown that the proposed algorithm outperforms several other commonly used methods, including the well-known Adam optimizer.
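To illustrate the building block the abstract refers to, the following is a minimal sketch of QR decomposition via classic (unscaled) Givens rotations. Each rotation zeroes one subdiagonal entry and requires a square root to form the cosine and sine, which is exactly the cost the scaled (SGQR) variant avoids. The function name and test matrix are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def givens_qr(A):
    """QR decomposition via classic Givens rotations.

    Each rotation zeroes one subdiagonal entry of R. Computing
    c and s needs a square root (np.hypot), the operation that
    square-root-free scaled rotations eliminate.
    """
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):                     # zero column j below the diagonal
        for i in range(m - 1, j, -1):
            a, b = R[i - 1, j], R[i, j]
            if b == 0.0:
                continue                   # entry already zero
            r = np.hypot(a, b)             # sqrt(a^2 + b^2)
            c, s = a / r, b / r
            G = np.eye(m)                  # rotation acting on rows i-1, i
            G[i - 1, i - 1] = G[i, i] = c
            G[i - 1, i], G[i, i - 1] = s, -s
            R = G @ R                      # R[i, j] becomes exactly zero
            Q = Q @ G.T                    # accumulate the orthogonal factor

    return Q, R

A = np.array([[4.0, 1.0], [2.0, 3.0], [1.0, 2.0]])
Q, R = givens_qr(A)
print(np.allclose(Q @ R, A))  # True: the rotations reproduce A
```

In a training algorithm such as GQR, rotations of this form are applied to the normal equations of the weight update; the scaled variant factors the rotated rows so that the square root and two of the four multiplications per rotated element drop out, which is the source of the speedup claimed above.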

Keywords

