Open Access

DefenseFea: An Input Transformation Feature Searching Algorithm Based Latent Space for Adversarial Defense

About this article

Akhtar N, Mian A, Kardan N, et al., Advances in adversarial attacks and defenses in computer vision: A survey, IEEE Access, 9, 2021, 155161-155196.

Bai Y, Zeng Y, Jiang Y, et al., Improving adversarial robustness via channel-wise activation suppressing, arXiv preprint arXiv:2103.08307, 2021.

Byun J, Cho S, Kwon M J, et al., Improving the transferability of targeted adversarial examples through object-based diverse input, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 15244-15253.

Carlini N, Wagner D., Towards evaluating the robustness of neural networks, 2017 IEEE Symposium on Security and Privacy (SP), 2017, 39-57.

Chen Z, Li B, Xu J, et al., Towards practical certifiable patch defense with vision transformer, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 15148-15158.

Dai T, Feng Y, Wu D, et al., DIPDefend: Deep image prior driven defense against adversarial examples, Proceedings of the 28th ACM International Conference on Multimedia, 2020, 1404-1412.

Das N, Shanbhogue M, Chen S T, et al., Keeping the bad guys out: Protecting and vaccinating deep learning with JPEG compression, arXiv preprint arXiv:1705.02900, 2017.

Deng Z, Yang X, Xu S, et al., LiBRe: A practical Bayesian approach to adversarial detection, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, 972-982.

Dong J, Moosavi-Dezfooli S M, Lai J, et al., The enemy of my enemy is my friend: Exploring inverse adversaries for improving adversarial training, arXiv preprint arXiv:2211.00525, 2022.

Goodfellow I J, Shlens J, Szegedy C., Explaining and harnessing adversarial examples, arXiv preprint arXiv:1412.6572, 2014.

He K, Zhang X, Ren S, et al., Deep residual learning for image recognition, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, 770-778.

He K, Zhang X, Ren S, et al., Identity mappings in deep residual networks, European Conference on Computer Vision, 2016, 630-645.

Hu S, Liu X, Zhang Y, et al., Protecting facial privacy: Generating adversarial identity masks via style-robust makeup transfer, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 15014-15023.

Hu Z, Huang S, Zhu X, et al., Adversarial texture for fooling person detectors in the physical world, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 13307-13316.

Kurakin A, Goodfellow I, Bengio S., Adversarial machine learning at scale, arXiv preprint arXiv:1611.01236, 2016.

Li T, Wu Y, Chen S, et al., Subspace adversarial training, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 13409-13418.

Madry A, Makelov A, Schmidt L, et al., Towards deep learning models resistant to adversarial attacks, arXiv preprint arXiv:1706.06083, 2017.

Moosavi-Dezfooli S M, Fawzi A, Frossard P., DeepFool: A simple and accurate method to fool deep neural networks, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, 2574-2582.

Papernot N, McDaniel P, Wu X, et al., Distillation as a defense to adversarial perturbations against deep neural networks, 2016 IEEE Symposium on Security and Privacy (SP), 2016, 582-597.

Qin Y, Frosst N, Sabour S, et al., Detecting and diagnosing adversarial images with class-conditional capsule reconstructions, arXiv preprint arXiv:1907.02957, 2019.

Russakovsky O, Deng J, Su H, et al., ImageNet large scale visual recognition challenge, International Journal of Computer Vision, 115, 3, 2015, 211-252.

Sato T, Shen J, Wang N, et al., Dirty road can attack: Security of deep learning based automated lane centering under physical-world attack, 30th USENIX Security Symposium (USENIX Security 21), 2021, 3309-3326.

Suryanto N, Kim Y, Kang H, et al., DTA: Physical camouflage attacks using differentiable transformation network, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 15305-15314.

Szegedy C, Vanhoucke V, Ioffe S, et al., Rethinking the inception architecture for computer vision, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, 2818-2826.

Szegedy C, Zaremba W, Sutskever I, et al., Intriguing properties of neural networks, arXiv preprint arXiv:1312.6199, 2013.

Tramèr F, Kurakin A, Papernot N, et al., Ensemble adversarial training: Attacks and defenses, arXiv preprint arXiv:1705.07204, 2017.

Wang J, Liu A, Yin Z, et al., Dual attention suppression attack: Generate adversarial camouflage in physical world, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, 8565-8574.

Xie C, Wu Y, Maaten L, et al., Feature denoising for improving adversarial robustness, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, 501-509.

Yan H, Zhang J, Niu G, et al., CIFS: Improving adversarial robustness of CNNs via channel-wise importance-based feature selection, International Conference on Machine Learning, PMLR, 2021, 11693-11703.

Yuan J, He Z., Ensemble generative cleaning with feedback loops for defending adversarial attacks, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, 581-590.

Zhang T, Zhu Z., Interpreting adversarially trained convolutional neural networks, International Conference on Machine Learning, PMLR, 2019, 7502-7511.

Zhong Y, Liu X, Zhai D, et al., Shadows can be dangerous: Stealthy and effective physical-world adversarial attack by natural phenomenon, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 15345-15354.

eISSN:
2300-3405
Language:
English
Frequency:
4 times per year
Journal subjects:
Computer Sciences, Artificial Intelligence, Software Development