
Automatic Classification of Unexploded Ordnance (UXO) Based on Deep Learning Neural Networks (DLNNs)



D. Ciresan, U. Meier, J. Masci, and J. Schmidhuber, "Multi-column deep neural network for traffic sign classification", Neural Networks, 2012, doi: 10.1016/j.neunet.2012.02.023.

Y. Zhao, M. Qi, X. Li, Y. Meng, Y. Yu, and Y. Dong, "P-LPN: Towards real time pedestrian location perception in complex driving scenes", IEEE Access, vol. 8, pp. 54730–54740, 2020, doi: 10.1109/ACCESS.2020.2981821.

E. Byvatov, U. Fechner, J. Sadowski, and G. Schneider, "Comparison of support vector machine and artificial neural network systems for drug/nondrug classification", J. Chem. Inf. Comput. Sci., vol. 43, no. 6, pp. 1882–1889, 2003, doi: 10.1021/ci0341161.

S. Lu, Z. Lu, and Y. Zhang, "Pathological brain detection based on AlexNet and transfer learning", J. Comput. Sci., vol. 30, pp. 41–47, 2019, doi: 10.1016/j.jocs.2018.11.008.

Ø. Midtgaard, R. E. Hansen, P. E. Hagen, and N. Størkersen, "Imaging sensors for autonomous underwater vehicles in military operations", in Proc. SET-169 Military Sensors Symposium, Friedrichshafen, Germany, May 2011.

HELCOM CHEMU, "Report to the 16th Meeting of Helsinki Commission 8-11 March 1994 from the Ad Hoc Working Group on Dumped Chemical Munition", Danish Environ. Protec. Agency, 1994.

J. Fabisiak and A. Olejnik, "Amunicja chemiczna zatopiona w Morzu Bałtyckim - poszukiwania i ocena ryzyka - projekt badawczy CHEMSEA (Chemical munitions dumped in the Baltic Sea - search and risk assessment - CHEMSEA research project)", Pol. Hyperb. Res., pp. 25–52, 2012.

"Sea mines Ukraine waters Russia war Black Sea," The Guardian, 2022. [Online]. Available: www.theguardian.com/world/2022/jul/11/sea-mines-ukraine-waters-russia-war-black-sea. [Accessed: June 21, 2023].

"Pretrained Convolutional Neural Networks," MathWorks, 2023. [Online]. Available: https://uk.mathworks.com/help/deeplearning/ug/pretrained-convolutional-neural-networks.html. [Accessed: June 21, 2023].

M. Chodnicki, P. Krogulec, M. Żokowski, and N. Sigiel, "Procedures concerning preparations of autonomous underwater systems to operation focused on detection, classification and identification of mine like objects and ammunition", J. KONBiN, vol. 48, no. 1, pp. 149–168, 2018, doi: 10.2478/jok-2018-0051.

Dowództwo Marynarki Wojennej (Naval Command), "Album Min Morskich" (Sea Mines Album), Gdynia, Poland: Mar. Woj., Sep. 1947.

"Image Colorization Using Generative Adversarial Networks," Pinterest, 2023. [Online]. Available: https://www.pinterest.co.uk/pin/145944844154595254/. [Accessed: Sep. 14, 2023].

"SNMCMG1 Photos," Facebook, 2023. [Online]. Available: https://www.facebook.com/snmcmg1/photos/a.464547430274739/2304079142988216/. [Accessed: Oct. 9, 2023].

"Pretrained Convolutional Neural Networks," MathWorks, 2023. [Online]. Available: https://www.mathworks.com/help/deeplearning/ug/pretrained-convolutional-neural-networks.html?searchHighlight=pretrained%20neural%20networks&s_tid=srchtitle_support_results_1_pretrained%2520neural%2520networks. [Accessed: Sep. 14, 2023].

P. Szymak, P. Piskur, and K. Naus, "The effectiveness of using a pretrained deep learning neural networks for object classification in underwater video", Remote Sens., vol. 12, no. 18, p. 3020, 2020, doi: 10.3390/rs12183020.

"NATO forces clear mines from the Baltic in Open Spirit operation," NATO, 2021. [Online]. Available: https://mc.nato.int/media-centre/news/2021/nato-forces-clear-mines-from-the-baltic-in-open-spirit-operation. [Accessed: Sep. 14, 2023].

F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size", arXiv preprint arXiv:1602.07360, 2016.

Z. Cui, C. Tang, Z. Cao, and N. Liu, "D-ATR for SAR images based on deep neural networks", Remote Sens., vol. 11, no. 8, p. 906, 2019, doi: 10.3390/rs11080906.

C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818–2826, doi: 10.1109/CVPR.2016.308.

G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4700–4708.

M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted residuals and linear bottlenecks", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4510–4520, doi: 10.1109/CVPR.2018.00474.

K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778, doi: 10.1109/CVPR.2016.90.

F. Chollet, "Xception: Deep learning with depthwise separable convolutions", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1251–1258, doi: 10.1109/CVPR.2017.195.

K. Nazeri, E. Ng, and M. Ebrahimi, "Image colorization using generative adversarial networks", in Articulated Motion and Deformable Objects: 10th International Conference, AMDO 2018, Palma de Mallorca, Spain, July 12-13, 2018, Proceedings, Springer, 2018, pp. 85–94, doi: 10.1007/978-3-319-94544-6_9.

X. Zhang, X. Zhou, M. Lin, and J. Sun, "ShuffleNet: An extremely efficient convolutional neural network for mobile devices", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6848–6856, doi: 10.1109/CVPR.2018.00716.

B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le, "Learning transferable architectures for scalable image recognition", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8697–8710, doi: 10.1109/CVPR.2018.00907.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., "ImageNet large scale visual recognition challenge", Int. J. Comput. Vis., vol. 115, pp. 211–252, 2015, doi: 10.1007/s11263-015-0816-y.

K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition", arXiv preprint arXiv:1409.1556, 2014.

W. Wu, L. Guo, H. Gao, Z. You, Y. Liu, and Z. Chen, "YOLO-SLAM: A semantic SLAM system towards dynamic environment with geometric constraint", Neural Comput. Appl., pp. 1–16, 2022, doi: 10.1007/s00521-021-06764-3.

Ü. Atila, M. Uçar, K. Akyol, and E. Uçar, "Plant leaf disease classification using EfficientNet deep learning model", Ecol. Inform., vol. 61, p. 101182, 2021, doi: 10.1016/j.ecoinf.2020.101182.
