Investigation of Adversarial Attacks on Pattern Recognition Neural Networks
https://doi.org/10.15514/ISPRAS-2023-35(2)-3
Abstract
This article discusses an algorithm for building a pattern recognition neural network. Several types of attacks on neural networks are considered, and the main features of such attacks are described. An analysis of the adversarial attack was carried out, and the results of experimental testing of the proposed attack are presented. The experiments confirm the hypothesis that the recognition accuracy of the neural network decreases when an attacker mounts the attack.
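The article's own attack code is not reproduced on this page. For orientation, the sketch below illustrates the Fast Gradient Sign Method (FGSM) of Goodfellow et al. [6], a representative adversarial attack of the kind analysed in the article. The use of PyTorch, the fgsm_attack helper and the epsilon budget are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.05):
    """Illustrative FGSM sketch (not the authors' code).

    model   -- a differentiable classifier returning raw logits
    image   -- input tensor of shape (1, C, H, W), pixel values in [0, 1]
    label   -- ground-truth class index, tensor of shape (1,)
    epsilon -- perturbation budget (maximum per-pixel change)
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, label)
    model.zero_grad()
    loss.backward()
    # Step in the direction that maximally increases the loss,
    # then clip back to the valid pixel range.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()
```

Evaluating a trained classifier on examples produced this way is expected to show the drop in recognition accuracy that the abstract reports, with the degradation growing as epsilon increases.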
About the Authors
Denis Vladimirovich KOTLYAROV
Russian Federation
Student
Gleb Dmitrievich DYUDYUN
Russian Federation
Student
Natalya Vitalievna RZHEVSKAYA
Russian Federation
Student
Maria Anatolyevna LAPINA
Russian Federation
Candidate of Physical and Mathematical Sciences, Associate Professor of the Department of Information Security of Automated Systems
Mikhail BABENKO
Russian Federation
Doctor of Physical and Mathematical Sciences, Head of the Department of Computational Mathematics and Cybernetics
References
1. Marshalko G. Attacks on biometric systems. Information Security (in Russian) / Маршалко Г. Атаки на биометрические системы. Information Security. Available at: https://www.itsec.ru/articles/ataka-na-biometricheskie-sistemy, accessed 30.03.2023.
2. Ivanyuk V.A. Neural Networks and Their Analysis. Chronoeconomics, issue 4, 2021, pp. 58-61 (in Russian) / Иванюк В.А. Нейронные сети и их анализ. Хроноэкономика, вып. 4, 2021 г., стр. 58-61.
3. Kachagina K.S., Safarova A.D. Neural Networks - Development Prospects. E-Scio, issue 2, 2021, 10 p. (in Russian) / Качагина К.С., Сафарова А.Д. Нейронные сети - перспективы развития. E-Scio, вып. 2, 2021 г., 10 стр.
4. Akhtar Z., Foresti G. L. Face spoof attack recognition using discriminative image patches. Journal of Electrical and Computer Engineering, 2016, article id 4721849, 15 p.
5. Chernobrov A. How to cheat a neural network or what is an Adversarial attack (in Russian) / Чернобров А. Как обмануть нейросеть или что такое Adversarial attack. Available at: https://chernobrovov.ru/articles/kak-obmanut-nejroset-ili-chto-takoe-adversarial-attack.html, accessed: 02.04.2023.
6. Goodfellow I.J., Shlens J., Szegedy C. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014, 11 p.
7. Kurakin, A., Goodfellow, I., & Bengio, S. (2016). Adversarial examples in the physical world. arXiv preprint arXiv:1607. 2016, 14 p.
8. Wiyatno R., Xu A. Maximal Jacobian-based Saliency Map Attack. arXiv preprint arXiv:1808.07945, 2018, 5 p.
9. Moosavi-Dezfooli S.-M., Fawzi A., Frossard P. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2574-2582.
10. Madry A., Makelov A. et al. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017, 28 p.
11. Papernot N., McDaniel P. et al. The limitations of deep learning in adversarial settings. In Proc. of the IEEE European Symposium on Security and Privacy (EuroS&P), 2016, pp. 372-387.
12. Carlini N., Wagner D. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proc. of the 10th ACM Workshop on Artificial Intelligence and Security, 2017, pp. 3-14.
13. Akhtar N., Mian A. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, vol. 6, 2018, pp. 14410-14430.
14. Xiao C., Deng R. et al. Characterizing adversarial examples based on spatial consistency information for semantic segmentation. Lecture Notes in Computer Science, vol. 11214, 2018, pp. 220-237.
15. Xie C., Wang J. et al. Adversarial examples for semantic segmentation and object detection. In Proc. of the IEEE International Conference on Computer Vision, 2017, pp. 1369-1378.
16. Samangouei P., Kabkab M., Chellappa R. Defense-GAN: Protecting classifiers against adversarial attacks using generative models. arXiv preprint arXiv:1805.06605, 2018, 18 p.
17. Xu H., Ma Y. et al. Adversarial Attacks and Defenses in Images, Graphs and Text: A Review. International Journal of Automation and Computing, vol. 17, issue 2, 2020, pp. 151-178.
18. Xu W., Evans D., Qi Y. Feature squeezing: Detecting adversarial examples in deep neural networks. In Proc. of the Network and Distributed Systems Security (NDSS) Symposium, 2018, 15 p.
19. Wang Y., Kang L. et al. Fingerprint presentation attack detection using convolutional neural network with transfer learning. IEEE Access, vol. 7, 2019, pp. 131443-131451.
20. Nguyen T.M., Kim K.H. et al. Generative adversarial network-based face presentation attack detection using partial convolution and multi-domain learning. IEEE Transactions on Information Forensics and Security, vol. 14, issue 10, 2019, pp. 2764-2779.
21. Li X., Chen T., Yang J. Adversarial fingerprint attacks and defenses. IEEE Transactions on Information Forensics and Security, vol. 14, issue 1, 2019, pp. 66-80.
22. Tan H., Li H. et al. Deep learning based liveness detection: A survey. ACM Computing Surveys, vol. 52, issue 3, 2019, pp. 1-27.
23. The official website of the NumPy Library. Available at: https://numpy.org, accessed 30.03.2023.
24. Examples of neural networks' implementation (in Russian) / Примеры реализации нейронных сетей. Available at: https://webtort.ru, accessed 30.03.2023.
For citations:
KOTLYAROV D.V., DYUDYUN G.D., RZHEVSKAYA N.V., LAPINA M.A., BABENKO M. Investigation of Adversarial Attacks on Pattern Recognition Neural Networks. Proceedings of the Institute for System Programming of the RAS (Proceedings of ISP RAS). 2023;35(2):35-48. https://doi.org/10.15514/ISPRAS-2023-35(2)-3