[1] H. H. Aghdam and E. J. Heravi, Guide to Convolutional Neural Networks: A Practical Application to Traffic-Sign Detection and Classification. Cham, Switzerland: Springer, 2017. https://doi.org/10.1007/978-3-319-57550-6
[2] A. Gibson and J. Patterson, Deep Learning: A Practitioner’s Approach. O’Reilly Media, Inc., 2017.
[3] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” arXiv:1409.1556v6 [cs.CV], 2015.
[4] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Communications of the ACM, vol. 60, no. 6, pp. 84–90, 2017. https://doi.org/10.1145/3065386
[5] P. Tang, H. Wang, and S. Kwong, “G-MS2F: GoogLeNet based multi-stage feature fusion of deep CNN for scene recognition,” Neurocomputing, vol. 225, pp. 188–197, 2017. https://doi.org/10.1016/j.neucom.2016.11.023
[6] V. Campos, B. Jou, and X. Giró-i-Nieto, “From pixels to sentiment: Fine-tuning CNNs for visual sentiment prediction,” Image and Vision Computing, vol. 65, pp. 15–22, 2017. https://doi.org/10.1016/j.imavis.2017.01.011
[7] Z. Bai, L. L. C. Kasun, and G.-B. Huang, “Generic object recognition with local receptive fields based extreme learning machine,” Procedia Computer Science, vol. 53, pp. 391–399, 2015. https://doi.org/10.1016/j.procs.2015.07.316
[8] V. V. Romanuke, “Appropriateness of Dropout layers and allocation of their 0.5 rates across convolutional neural networks for CIFAR-10, EEACL26, and NORB datasets,” Applied Computer Systems, vol. 22, no. 1, pp. 54–63, 2017. https://doi.org/10.1515/acss-2017-0018
[9] M. Ranzato, C. Poultney, S. Chopra, and Y. L. Cun, “Efficient Learning of Sparse Representations with an Energy-Based Model,” Advances in Neural Information Processing Systems, vol. 19, pp. 1137–1144, 2006.
[10] V. V. Romanuke, “Training data expansion and boosting of convolutional neural networks for reducing the MNIST dataset error rate,” Research Bulletin of the National Technical University of Ukraine “Kyiv Polytechnic Institute”, no. 6, pp. 29–34, 2016. https://doi.org/10.20535/1810-0546.2016.6.84115
[11] E. Kussul and T. Baidyk, “Improved method of handwritten digit recognition tested on MNIST database,” Image and Vision Computing, vol. 22, no. 12, pp. 971–981, 2004. https://doi.org/10.1016/j.imavis.2004.03.008
[12] P. Date, J. A. Hendler, and C. D. Carothers, “Design index for deep neural networks,” Procedia Computer Science, vol. 88, pp. 131–138, 2016. https://doi.org/10.1016/j.procs.2016.07.416
[13] D. Ciresan, U. Meier, J. Masci, L. M. Gambardella, and J. Schmidhuber, “Flexible, high performance convolutional neural networks for image classification,” in Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, vol. 2, pp. 1237–1242, 2011. https://doi.org/10.5591/978-1-57735-516-8/IJCAI11-210
[14] V. V. Romanuke, “Two-layer perceptron for classifying flat scaled-turned-shifted objects by additional feature distortions in training,” Journal of Uncertain Systems, vol. 9, no. 4, pp. 286–305, 2015.
[15] V. V. Romanuke, “Boosting ensembles of heavy two-layer perceptrons for increasing classification accuracy in recognizing shifted-turned-scaled flat images with binary features,” Journal of Information and Organizational Sciences, vol. 39, no. 1, pp. 75–84, 2015.
[16] V. V. Romanuke, “An attempt of finding an appropriate number of convolutional layers in CNNs based on benchmarks of heterogeneous datasets,” Electrical, Control and Communication Engineering, vol. 14, no. 1, pp. 51–57, 2018. https://doi.org/10.2478/ecce-2018-0006
[17] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, pp. 1929–1958, 2014.
[18] V. V. Romanuke, “Appropriate number and allocation of ReLUs in convolutional neural networks,” Research Bulletin of the National Technical University of Ukraine “Kyiv Polytechnic Institute”, no. 1, pp. 69–78, 2017. https://doi.org/10.20535/1810-0546.2017.1.88156
[19] Z. Liao and G. Carneiro, “A deep convolutional neural network module that promotes competition of multiple-size filters,” Pattern Recognition, vol. 71, pp. 94–105, 2017. https://doi.org/10.1016/j.patcog.2017.05.024
[[20] J. Mutch and D. G. Lowe, “Object class recognition and localization using sparse features with limited receptive fields,” International Journal of Computer Vision, vol. 80, no. 1, pp. 45–57, 2008. https://doi.org/10.1007/s11263-007-0118-010.1007/s11263-007-0118-0]DOI öffnenSearch in Google Scholar