[1] K. Hagiwara and K. Fukumizu, “Relation Between Weight Size and Degree of Over-Fitting in Neural Network Regression,” Neural Networks, vol. 21, no. 1, pp. 48–58, Jan. 2008. https://doi.org/10.1016/j.neunet.2007.11.001
[2] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, Improving Neural Networks by Preventing Co-Adaptation of Feature Detectors, 2012.
[3] H. Wu and X. Gu, “Towards Dropout Training for Convolutional Neural Networks,” Neural Networks, vol. 71, pp. 1–10, Nov. 2015. https://doi.org/10.1016/j.neunet.2015.07.007
[4] A. Iosifidis, A. Tefas, and I. Pitas, “DropELM: Fast Neural Network Regularization with Dropout and DropConnect,” Neurocomputing, vol. 162, pp. 57–66, Aug. 2015. https://doi.org/10.1016/j.neucom.2015.04.006
[5] M. Elleuch, R. Maalej, and M. Kherallah, “A New Design Based-SVM of the CNN Classifier Architecture with Dropout for Offline Arabic Handwritten Recognition,” Procedia Computer Science, vol. 80, pp. 1712–1723, 2016. https://doi.org/10.1016/j.procs.2016.05.512
[6] W. Sun and F. Su, “A Novel Companion Objective Function for Regularization of Deep Convolutional Neural Networks,” Image and Vision Computing, vol. 60, pp. 58–63, Apr. 2017. https://doi.org/10.1016/j.imavis.2016.11.012
[7] V. V. Romanuke, “Training Data Expansion and Boosting of Convolutional Neural Networks for Reducing the MNIST Dataset Error Rate,” Research Bulletin of NTUU “Kyiv Polytechnic Institute”, no. 6, pp. 29–34, Dec. 2016. https://doi.org/10.20535/1810-0546.2016.6.84115
[8] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting,” Journal of Machine Learning Research, vol. 15, pp. 1929–1958, 2014.
[9] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going Deeper with Convolutions,” 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2015. https://doi.org/10.1109/CVPR.2015.7298594
[10] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” Communications of the ACM, vol. 60, no. 6, pp. 84–90, May 2017. https://doi.org/10.1145/3065386
[11] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” 3rd International Conference on Learning Representations (ICLR 2015), 2015.
[12] J. Kim, O. Sangjun, Y. Kim, and M. Lee, “Convolutional Neural Network with Biologically Inspired Retinal Structure,” Procedia Computer Science, vol. 88, pp. 145–154, 2016. https://doi.org/10.1016/j.procs.2016.07.418
[13] D. C. Ciresan, U. Meier, J. Masci, L. M. Gambardella, and J. Schmidhuber, “Flexible, High Performance Convolutional Neural Networks for Image Classification,” Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, vol. 2, pp. 1237–1242, 2011. https://doi.org/10.5591/978-1-57735-516-8/IJCAI11-210
[14] P. Date, J. A. Hendler, and C. D. Carothers, “Design Index for Deep Neural Networks,” Procedia Computer Science, vol. 88, pp. 131–138, 2016. https://doi.org/10.1016/j.procs.2016.07.416
[15] V. V. Romanuke, “Two-Layer Perceptron for Classifying Flat Scaled-Turned-Shifted Objects by Additional Feature Distortions in Training,” Journal of Uncertain Systems, vol. 9, no. 4, pp. 286–305, 2015.
[16] V. V. Romanuke, “Boosting Ensembles of Heavy Two-Layer Perceptrons for Increasing Classification Accuracy in Recognizing Shifted-Turned-Scaled Flat Images with Binary Features,” Journal of Information and Organizational Sciences, vol. 39, no. 1, pp. 75–84, 2015.
[17] E. Kussul and T. Baidyk, “Improved Method of Handwritten Digit Recognition Tested on MNIST Database,” Image and Vision Computing, vol. 22, no. 12, pp. 971–981, Oct. 2004. https://doi.org/10.1016/j.imavis.2004.03.008