A. V. Trusov, E. E. Limonova, V. V. Arlazarov, A. A. Zatsarinnyy. Vulnerability Analysis of Neural Networks in Computer Vision
Abstract. 

This work addresses the pressing problem of the vulnerability of artificial intelligence technologies based on neural networks. We show that the use of neural networks introduces numerous vulnerabilities and illustrate them with specific examples: misclassification of images containing adversarial noise or patches, failure of recognition systems in the presence of special patterns in an image, including patterns applied to objects in the real world, poisoning of training data, and others. Based on this analysis, we argue that the security of artificial intelligence technologies must be improved and offer several considerations that contribute to such improvement.
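To make the first of these vulnerabilities concrete: adversarial noise of the kind discussed in the paper can be produced with a single gradient step, as in the fast gradient sign method (FGSM). The following PyTorch sketch is illustrative rather than taken from the paper; the function name, the epsilon value, and the assumption that inputs are scaled to [0, 1] are our own choices.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # Fast gradient sign method: take one gradient step that increases
    # the classification loss. `image` is a batch of inputs in [0, 1];
    # `epsilon` bounds the per-pixel change, keeping the noise nearly
    # imperceptible to a human observer.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + epsilon * image.grad.sign()  # move pixels against the model
    return adv.clamp(0.0, 1.0).detach()

Applied to a typical image classifier, a perturbation of this size often flips the predicted class while remaining invisible to a person, which is precisely the failure mode analyzed in the paper.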

Keywords: 

neural networks, attacks on neural networks, adversarial images, neural network security.

PP. 49-58.

DOI 10.14357/20718632230405 

EDN BVJZWS
 
References

1. Ye M., Shen J., Lin G., Xiang T., Shao L., Hoi S.C. Deep learning for person re-identification: A survey and outlook. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2021 Jan 26;44(6):2872-93. doi: 10.1109/TPAMI.2021.3054775.
2. Arlazarov V.V., Andreeva E.I., Bulatov K.B., Nikolaev D.P., Petrova O.O., Savelev B.I., Slavin O.A. Document image analysis and recognition: A survey. Computer Optics. 2022; 46(4):567–589. doi: 10.18287/2412-6179-CO-1020.
3. Yang B., Cao X., Xiong K., Yuen C., Guan Y.L., Leng S., Qian L., Han Z. Edge intelligence for autonomous driving in 6G wireless system: Design challenges and solutions. IEEE Wireless Communications. 2021 Apr;28(2):40-7. doi:10.1109/MWC.001.2000292.
4. Gu T., Dolan-Gavitt B., Garg S. BadNets: Identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733. 2017 Aug 22.
5. Fredrikson M., Jha S., Ristenpart T. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security 2015 Oct 12 (pp. 1322-1333).
6. Szegedy C., Zaremba W., Sutskever I., Bruna J., Erhan D., Goodfellow I., Fergus R. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. 2013 Dec 21.
7. Brown T.B., Mané D., Roy A., Abadi M., Gilmer J. Adversarial patch. arXiv preprint arXiv:1712.09665. 2017 Dec 27.
8. Lin C.S., Hsu C.Y., Chen P.Y., Yu C.M. Real-world adversarial examples via makeup. In ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2022 May 23 (pp. 2854-2858). IEEE. doi:10.1109/ICASSP43922.2022.9747469.
9. Hu S., Liu X., Zhang Y., Li M., Zhang L.Y., Jin H., Wu L. Protecting facial privacy: Generating adversarial identity masks via style-robust makeup transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2022 (pp. 15014-15023).
10. Zolfi A., Avidan S., Elovici Y., Shabtai A. Adversarial Mask: Real-World Universal Adversarial Attack on Face Recognition Models. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases 2022 Sep 19 (pp. 304-320). Cham: Springer Nature Switzerland.
11. Zhou Z., Tang D., Wang X., Han W., Liu X., Zhang K. Invisible mask: Practical attacks on face recognition with infrared. arXiv preprint arXiv:1803.04683. 2018 Mar 13.
12. Wu Z., Lim S.N., Davis L.S., Goldstein T. Making an invisibility cloak: Real world adversarial attacks on object detectors. In Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IV 2020 (pp. 1-17). Springer International Publishing.
13. Thys S., Van Ranst W., Goedemé T. Fooling automated surveillance cameras: adversarial patches to attack person detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops 2019.
14. Chen J., Chen H., Chen K., Zhang Y., Zou Z., Shi Z. Diffusion Models for Imperceptible and Transferable Adversarial Attack. arXiv preprint arXiv:2305.08192. 2023 May 14.
15. Hong S., Davinroy M., Kaya Y., Locke S.N., Rackow I., Kulda K., Dachman-Soled D., Dumitraş T. Security analysis of deep neural networks operating in the presence of cache side-channel attacks. arXiv preprint arXiv:1810.03487. 2018 Oct 8.
16. Oh S.J., Schiele B., Fritz M. Towards reverse-engineering black-box neural networks. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. 2019:121-44.
17. Chmielewski Ł., Weissbart L. On reverse engineering neural network implementation on GPU. In Applied Cryptography and Network Security Workshops: ACNS 2021 Satellite Workshops, AIBlock, AIHWS, AIoTS, CIMSS, Cloud S&P, SCI, SecMT, and SiMLA, Kamakura, Japan, June 21–24, 2021, Proceedings 2021 (pp. 96-113). Springer International Publishing.
18. Goldblum M., Tsipras D., Xie C., Chen X., Schwarzschild A., Song D., Mądry A., Li B., Goldstein T. Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2022 Mar 25;45(2):1563-80. doi:10.1109/TPAMI.2022.3162397.
19. Shafahi A., Huang W.R., Najibi M., Suciu O., Studer C., Dumitras T., Goldstein T. Poison frogs! targeted clean-label poisoning attacks on neural networks. Advances in neural information processing systems. 2018;31.
20. Wang Y., Deng J., Guo D., Wang C., Meng X., Liu H., Ding C., Rajasekaran S. SAPAG: A self-adaptive privacy attack from gradients. arXiv preprint arXiv:2009.06228. 2020 Sep 14.
21. Warr K. Strengthening deep neural networks: Making AI less susceptible to adversarial trickery. O'Reilly Media; 2019 Jul 3.
22. Long T., Gao Q., Xu L., Zhou Z. A survey on adversarial attacks in computer vision: Taxonomy, visualization and future directions. Computers & Security. 2022 Jul 22:102847.
23. Akhtar N., Mian A. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access. 2018 Feb 19;6:14410-30. doi:10.1109/ACCESS.2018.2807385.
24. Machado G.R., Silva E., Goldschmidt R.R. Adversarial machine learning in image classification: A survey toward the defender’s perspective. ACM Computing Surveys (CSUR). 2021 Nov 23;55(1):1-38.
25. Ren K., Zheng T., Qin Z., Liu X. Adversarial attacks and defenses in deep learning. Engineering. 2020 Mar 1;6(3):346-60.
26. Zhang X., Zhang X., Sun M., Zou X., Chen K., Yu N. Imperceptible black-box waveform-level adversarial attack towards automatic speaker recognition. Complex & Intelligent Systems. 2023 Feb;9(1):65-79.
27. Kwon H., Lee S. Ensemble transfer attack targeting text classification systems. Computers & Security. 2022 Jun 1;117:102695.
28. Mo K., Tang W., Li J., Yuan X. Attacking deep reinforcement learning with decoupled adversarial policy. IEEE Transactions on Dependable and Secure Computing. 2022 Jan 18;20(1):758-68.
29. Zhou X., Liang W., Li W., Yan K., Shimizu S., Kevin I., Wang K. Hierarchical adversarial attacks against graph-neural-network-based IoT network intrusion detection system. IEEE Internet of Things Journal. 2021 Nov 24;9(12):9310-9. doi:10.1109/JIOT.2021.3130434.
30. Kumar R.S., Nyström M., Lambert J., Marshall A., Goertzel M., Comissoneru A., Swann M., Xia S. Adversarial machine learning-industry perspectives. In 2020 IEEE Security and Privacy Workshops (SPW) 2020 May 21 (pp. 69-75). IEEE. doi:10.1109/SPW50608.2020.00028.
31. Paleyes A., Urma R.G., Lawrence N.D. Challenges in deploying machine learning: a survey of case studies. ACM Computing Surveys. 2022 Dec 7;55(6):1-29.
32. Ala-Pietilä P., Bonnet Y., Bergmann U., Bielikova M., Bonefeld-Dahl C., Bauer W., Bouarfa L., Chatila R., Coeckelbergh M., Dignum V., Gagné J.F. The assessment list for trustworthy artificial intelligence (ALTAI). European Commission; 2020 Jul 17.
33. Musser M., Lohn A., Dempsey J.X., Spring J., Kumar R.S., Leong B., Liaghati C., Martinez C., Grant C.D., Rohrer D., Frase H. Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and Legal Implications. arXiv preprint arXiv:2305.14553. 2023 May 23.
34. The Record. Facial recognition's latest foe: Italian knitwear. Available from: https://therecord.media/facial-recognitions-latest-foe-italian-knitwear [Accessed 20 July 2023].
35. Habr. How we fight content copying, or the first adversarial attack in prod. Available from: https://habr.com/ru/companies/avito/articles/452142 [Accessed 20 July 2023].
36. Povolny S., Trivedi S. Model hacking ADAS to pave safer roads for autonomous vehicles. McAfee Blogs. Available from: https://www.mcafee.com/blogs/other-blogs/mcafee-labs/model-hacking-adas-to-pave-safer-roads-for-autonomous-vehicles/ [Accessed 20 July 2023].
 
