
Proceedings of the Institute for System Programming of the RAS (Proceedings of ISP RAS)


SLAP – Simple Linear Attack for Perceptron

https://doi.org/10.15514/ISPRAS-2024-36(3)-6

Abstract

This article introduces a new approach to fooling perceptron-based neural networks with piecewise linear activation functions using basic linear algebra. By formulating the attack as a system of linear equations and inequalities, it demonstrates a streamlined and computationally efficient way to generate diverse sets of adversarial examples. The algorithms for the proposed attack have been implemented in code that is available in an open-source repository. The study highlights the formidable challenge the proposed attack methodology poses for contemporary neural network defenses, emphasizing the pressing need for new defense strategies. Through a comprehensive exploration of adversarial vulnerabilities, this research contributes to the advancement of adversarial robustness in machine learning, paving the way for more reliable and trustworthy artificial intelligence systems in real-world applications.
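To make the attack idea concrete, below is a minimal sketch (not the paper's implementation) of the simplest case: a single linear layer with logits W·x + b. Finding the smallest-norm perturbation δ that forces a chosen target class reduces to a quadratic program with linear inequality constraints, which can be handed to the qpsolvers package listed in the references. For deeper networks with piecewise linear activations such as ReLU, fixing the activation pattern makes the whole network affine in its input, so the same construction applies within each linear region. All names, dimensions, and the toy model below are illustrative assumptions, not code from the paper's repository.

# Minimal illustrative sketch: smallest L2 perturbation that makes a
# single linear layer (logits = W @ x + b) predict a chosen class.
# Toy weights and sizes are assumptions; this is not the SLAP repository code.
# Requires a QP backend, e.g.: pip install qpsolvers quadprog
import numpy as np
from qpsolvers import solve_qp

rng = np.random.default_rng(0)
d, k = 32, 5                    # input dimension, number of classes
W = rng.normal(size=(k, d))     # stand-in for trained weights
b = rng.normal(size=k)
x = rng.uniform(size=d)         # clean input with features in [0, 1]
target = 2                      # class the attack should force
eps = 1e-3                      # required logit margin

# For every other class j:
#   (W[j] - W[target]) @ (x + delta) + (b[j] - b[target]) <= -eps,
# i.e. linear inequalities G @ delta <= h in the perturbation delta.
rows = [j for j in range(k) if j != target]
G = W[rows] - W[target]
h = -eps - G @ x - (b[rows] - b[target])

# Minimize ||delta||^2 subject to G @ delta <= h, keeping x + delta in [0, 1].
delta = solve_qp(P=np.eye(d), q=np.zeros(d), G=G, h=h,
                 lb=-x, ub=1.0 - x, solver="quadprog")

assert delta is not None, "QP reported infeasible for this toy instance"
logits = W @ (x + delta) + b
print("forced class:", int(np.argmax(logits)),
      "perturbation norm:", float(np.linalg.norm(delta)))

Because the objective is quadratic and all constraints are linear, the solver returns the exact minimal perturbation for the region in a single call, which is what makes this kind of formulation computationally cheap compared with iterative gradient-based attacks.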

About the Author

Andrey Igorevich PERMINOV
Ivannikov Institute for System Programming of the Russian Academy of Sciences

Postgraduate student and researcher. His research interests include digital signal processing, neural network data processing, the development of trusted models and machine learning algorithms, and the creation of synthetic data.



References

1. SLAEAttack repository (link withheld for blind review).

2. Chakraborty, A., Alam, M., Dey, V., Chattopadhyay, A., & Mukhopadhyay, D. (2021). A survey on adversarial attacks and defences. CAAI Transactions on Intelligence Technology, 6(1), 25-45.

3. Goodfellow, I. J., Shlens, J., & Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.

4. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2017). Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.

5. Croce, F., & Hein, M. (2019). Sparse and imperceivable adversarial attacks. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 4724-4732).

6. Wong, E., & Kolter, Z. (2018, July). Provable defenses against adversarial examples via the convex outer adversarial polytope. In International conference on machine learning (pp. 5286-5295). PMLR.

7. CIFAR-10 dataset, https://www.cs.toronto.edu/~kriz/cifar.html. Last accessed 12 Mar 2024.

8. QP solvers, https://pypi.org/project/qpsolvers/. Last accessed 12 Mar 2024.

9. Cats and Dogs Dataset, https://www.microsoft.com/en-us/download/details.aspx?id=54765. Last accessed 12 Mar 2024.

10. Targ, S., Almeida, D., & Lyman, K. (2016). ResNet in ResNet: Generalizing residual architectures. arXiv preprint arXiv:1603.08029.

11. Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in neural information processing systems, 25.

12. Tan, M., & Le, Q. (2019, May). EfficientNet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning (pp. 6105-6114). PMLR.



For citations:


PERMINOV A.I. SLAP – Simple Linear Attack for Perceptron. Proceedings of the Institute for System Programming of the RAS (Proceedings of ISP RAS). 2024;36(3):83-92. (In Russ.) https://doi.org/10.15514/ISPRAS-2024-36(3)-6



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2079-8156 (Print)
ISSN 2220-6426 (Online)