
Proceedings of the Institute for System Programming of the RAS (Proceedings of ISP RAS)


Effect of transformations on the success of adversarial attacks for Clipped BagNet and ResNet image classifiers

https://doi.org/10.15514/ISPRAS-2022-34(6)-7

Abstract

Our paper compares the accuracy of a vanilla ResNet-18 model with that of Clipped BagNet-33 and adversarially trained BagNet-33 models under different conditions. We performed experiments on images attacked with an adversarial sticker while the images undergo transformations. An adversarial sticker is a small region of the attacked image inside which pixel values can be changed arbitrarily, which can cause the model to make incorrect predictions. The transformations of the attacked images in this paper simulate distortions that arise in the physical world when changes in perspective, scale, or lighting alter the image. Our experiments show that models of the BagNet family perform poorly on low-quality images. We also analyze how different types of transformations affect both the models' robustness to adversarial attacks and the resilience of the attacks themselves.
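To make the setup concrete, the following is a minimal sketch (not the authors' code) of the attack scenario the abstract describes: an adversarial sticker overwrites a small image region with unconstrained pixel values, and the attacked image is then subjected to transformations of the kind the paper studies (scale, viewpoint, lighting). It assumes PyTorch and torchvision; the sticker here is random rather than optimized, and the sticker size, its position, and all transformation parameters are illustrative.

import torch
import torchvision.transforms.functional as TF

def apply_sticker(image: torch.Tensor, sticker: torch.Tensor,
                  x: int, y: int) -> torch.Tensor:
    """Overwrite a small square region of a CHW image with the sticker.

    Inside the sticker region pixel values are unconstrained (here: random),
    modeling the arbitrary pixel values of an adversarial sticker.
    """
    attacked = image.clone()
    _, h, w = sticker.shape
    attacked[:, y:y + h, x:x + w] = sticker
    return attacked

def physical_distortions(image: torch.Tensor) -> torch.Tensor:
    """Simulate physical-world distortions: scale, viewpoint and lighting."""
    image = TF.resize(image, [192, 192])        # change of scale
    image = TF.rotate(image, angle=10.0)        # change of viewpoint
    image = TF.adjust_brightness(image, 1.3)    # change of lighting
    return TF.resize(image, [224, 224])         # back to the model input size

image = torch.rand(3, 224, 224)                 # stand-in for an ImageNet image
sticker = torch.rand(3, 33, 33)                 # unconstrained 33x33 patch
attacked = apply_sticker(image, sticker, x=50, y=50)
distorted = physical_distortions(attacked)      # input fed to the classifier

In the actual attack the sticker contents would be optimized (e.g., by gradient ascent on the classifier's loss) rather than sampled at random; the transformations above only model the distortions against which the attack's success is measured.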

About the Authors

Ekaterina Olegovna KURDENKOVA
Ivannikov Institute for System Programming of the Russian Academy of Sciences
Russian Federation

Graduate Research Trainee at ISP RAS Research Center for Trusted Artificial Intelligence



Maria Sergeevna CHEREPNINA
Technical University of Munich
Germany

Master’s Student



Anna Sergeevna CHISTYAKOVA
Ivannikov Institute for System Programming of the Russian Academy of Sciences; Lomonosov Moscow State University
Russian Federation

Bachelor’s Student at the Faculty of Computational Mathematics and Cybernetics (CMC) of Moscow State University, Assistant at ISP RAS Research Center for Trusted Artificial Intelligence



Konstantin Vladimirovich ARKHIPENKO
Ivannikov Institute for System Programming of the Russian Academy of Sciences
Russian Federation

Junior Research Fellow at ISP RAS Research Center for Trusted Artificial Intelligence





For citations:


KURDENKOVA E.O., CHEREPNINA M.S., CHISTYAKOVA A.S., ARKHIPENKO K.V. Effect of transformations on the success of adversarial attacks for Clipped BagNet and ResNet image classifiers. Proceedings of the Institute for System Programming of the RAS (Proceedings of ISP RAS). 2022;34(6):101-116. (In Russ.) https://doi.org/10.15514/ISPRAS-2022-34(6)-7



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2079-8156 (Print)
ISSN 2220-6426 (Online)