Proceedings of the Institute for System Programming of the RAS (Proceedings of ISP RAS)

Adversarial Purification for No-Reference Image-Quality Metrics: Applicability Study and New Methods

https://doi.org/10.15514/ISPRAS-2025-37(3)-5

Abstract

Recently, the area of adversarial attacks on image quality metrics has begun to be explored, whereas the area of defences against such attacks remains under-researched. In this study, we aim to address this gap and examine the transferability of adversarial purification defences from image classifiers to IQA methods. We apply several widespread attacks to IQA models and evaluate how successfully different defences neutralize them. The purification methods cover a range of preprocessing techniques, including geometrical transformations, compression, denoising, and modern neural-network-based approaches. We also address the challenge of assessing the efficacy of a defensive methodology by proposing ways to estimate both the output visual quality and the success of attack neutralization. We test the defences against attacks on three IQA metrics: Linearity, MetaIQA and SPAQ.
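
The core idea of purification is to preprocess a possibly attacked image before it reaches the metric. The sketch below illustrates this with JPEG re-encoding as the purifier; score_image, the quality factor of 75, and the gain computation are illustrative placeholders rather than the methods proposed in the paper.

# Minimal sketch of purification as preprocessing for a no-reference IQA metric.
# jpeg_purify and score_image are illustrative stand-ins, not the paper's methods.
import io

import numpy as np
from PIL import Image


def jpeg_purify(image: Image.Image, quality: int = 75) -> Image.Image:
    """Re-encode the image with lossy JPEG to suppress adversarial perturbations."""
    buffer = io.BytesIO()
    image.convert("RGB").save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer).convert("RGB")


def score_image(image: Image.Image) -> float:
    """Placeholder NR-IQA scorer; replace with a real metric (Linearity, MetaIQA, SPAQ)."""
    # Trivial stand-in: mean pixel intensity scaled to [0, 1].
    return float(np.asarray(image.convert("L"), dtype=np.float32).mean() / 255.0)


def purification_gain(attacked: Image.Image) -> float:
    """Metric value on the attacked image minus the value after purification.
    A large positive gap suggests the attack's score inflation did not survive
    the purifier."""
    return score_image(attacked) - score_image(jpeg_purify(attacked))

Other purifiers from the families listed above (geometrical transformations, denoising filters, restoration networks) can be swapped in behind the same interface, and the visual quality of the purified output can be checked against the original image with a full-reference metric such as SSIM.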

About the Authors

Aleksandr Evgenevich GUSHCHIN
Lomonosov Moscow State University, Research Centre for Trusted Artificial Intelligence of ISP RAS
Russian Federation

Received his master's degree in computer science from Moscow State University in 2024. He is currently a postgraduate student at the MSU Graphics & Media Lab and a researcher at the Research Centre for Trusted AI of ISP RAS. His research interests include image and video processing, quality assessment, and machine learning. He is also a key contributor to the project analyzing video quality assessment methods, including their robustness to adversarial attacks.



Anastasia Vsevolodovna ANTSIFEROVA
Lomonosov Moscow State University, Research Centre for Trusted Artificial Intelligence of ISP RAS
Russian Federation

Received her master's degree in computer science from Moscow State University in 2018. She is a postgraduate student at Moscow State University and a member of the Video Group at the MSU Graphics & Media Lab. She is also a researcher at the Research Centre for Trusted AI of ISP RAS. Her research interests include video codec analysis and optimization, as well as subjective quality assessment of stereoscopic video. She is one of the contributors to the MSU Video Codec Comparison project and to the 3D video quality measurement project.



Dmitriy Sergeevich VATOLIN
Lomonosov Moscow State University, Research Centre for Trusted Artificial Intelligence of ISP RAS
Russian Federation

Received his Cand. Sci. (Phys.-Math.) degree from Moscow State University. He is the head of the MSU Graphics & Media Lab and the MSU AI Institute Video Analysis Lab, and a researcher at the Research Centre for Trusted AI of ISP RAS. His research interests include compression methods, video processing, 3D video techniques (depth from motion, focus and other cues, video matting, background restoration, high-quality stereo generation), as well as video quality assessment and the robustness of modern video quality metrics. He is a key cofounder of the 3D video quality measurement project; his best-known project is the annual MSU Video Codecs Comparison, which covers up to 25 modern codecs compared subjectively and objectively in several nominations, with more than 20,000 detailed charts.



References

1. Duanmu, Z., Liu, W., Wang, Z., Wang, Z.: Quantifying visual image quality: A Bayesian view. Annual Review of Vision Science 7, 437–464 (2021).

2. Zvezdakova, A., Zvezdakov, S., Kulikov, D., Vatolin, D.: Hacking VMAF with video color and contrast distortion. arXiv preprint arXiv:1907.04807 (2019).

3. Shumitskaya, E., Antsiferova, A., Vatolin, D.: Universal perturbation attack on differentiable no-reference image- and video-quality metrics (2022).

4. Shumitskaya, E., Antsiferova, A., Vatolin, D.: Towards adversarial robustness verification of no-reference image- and video-quality metrics. Computer Vision and Image Understanding 240, 103913 (2024).

5. Zhang, W., Li, D., Min, X., Zhai, G., Guo, G., Yang, X., Ma, K.: Perceptual attacks of no-reference image quality models with human-in-the-loop. Advances in Neural Information Processing Systems 35, 2916–2929 (2022).

6. Korhonen, J., You, J.: Adversarial attacks against blind image quality assessment models. Proceedings of the 2nd Workshop on Quality of Experience in Visual Multimedia Applications (2022), https://api.semanticscholar.org/CorpusID:252546140.

7. Luo, C., Lin, Q., Xie, W., Wu, B., Xie, J., Shen, L.: Frequency-driven imperceptible adversarial attack on semantic similarity. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 15315–15324 (June 2022).

8. Li, Z.: On VMAF's property in the presence of image enhancement operations (2021).

9. Ghazanfari, S., Garg, S., Krishnamurthy, P., Khorrami, F., Araujo, A.: R-LPIPS: An adversarially robust perceptual similarity metric. arXiv preprint arXiv:2307.15157 (2023).

10. Kettunen, M., Härkönen, E., Lehtinen, J.: E-LPIPS: Robust perceptual image similarity via random transformation ensembles. arXiv preprint arXiv:1906.03973 (2019).

11. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I.J., Fergus, R.: Intriguing properties of neural networks. CoRR abs/1312.6199 (2013), https://api.semanticscholar.org/CorpusID:604334 (accessed 12.09.2024).

12. Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks (2017).

13. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples (2015).

14. Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial examples in the physical world (2017).

15. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks (2019).

16. Zhao, Z., Liu, Z., Larson, M.: Adversarial color enhancement: Generating unrestricted adversarial images by optimizing a color filter (2020).

17. Sang, Q., Zhang, H., Liu, L., Wu, X., Bovik, A.: On the generation of adversarial samples for image quality assessment. SSRN Electronic Journal (01 2022). https://doi.org/10.2139/ssrn.4112969 (accessed 12.09.2024).

18. Ghildyal, A., Liu, F.: Attacking perceptual similarity metrics (2023).

19. Graese, A., Rozsa, A., Boult, T.E.: Assessing threat of adversarial examples on deep neural networks (2016).

20. Guo, C., Rana, M., Cisse, M., van der Maaten, L.: Countering adversarial images using input transformations (2018).

21. Das, N., Shanbhogue, M., Chen, S.T., Hohman, F., Chen, L., Kounavis, M.E., Chau, D.H.: Keeping the bad guys out: Protecting and vaccinating deep learning with jpeg compression (2017).

22. Dziugaite, G.K., Ghahramani, Z., Roy, D.M.: A study of the effect of jpg compression on adversarial images (2016).

23. Xu, W., Evans, D., Qi, Y.: Feature squeezing: Detecting adversarial examples in deep neural networks. CoRR abs/1704.01155 (2017), http://arxiv.org/abs/1704.01155 (accessed 12.09.2024).

24. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena 60, 259–268 (1992), https://api.semanticscholar.org/CorpusID:13133466.

25. Meng, D., Chen, H.: Magnet: a two-pronged defense against adversarial examples (2017).

26. Samangouei, P., Kabkab, M., Chellappa, R.: Defense-GAN: Protecting classifiers against adversarial attacks using generative models (2018).

27. Nie, W., Guo, B., Huang, Y., Xiao, C., Vahdat, A., Anandkumar, A.: Diffusion models for adversarial purification (2022).

28. NIPS 2017: Adversarial learning development set. https://www.kaggle.com/datasets/google-brain/nips-2017-adversarial-learning-development-set (2017) (accessed 12.09.2024).

29. Dong, Y., Liao, F., Pang, T., Su, H., Zhu, J., Hu, X., Li, J.: Boosting adversarial attacks with momentum. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 9185–9193 (2018).

30. On the generation of adversarial examples for image quality assessment. Visual Computer (2023). https://doi.org/10.1007/s00371-023-03019-1 (accessed 12.09.2024).

31. Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “completely blind” image quality analyzer. IEEE Signal Processing Letters 20, 209–212 (2013), https://api.semanticscholar.org/CorpusID:16892725.

32. Wang, Z., Simoncelli, E.: Maximum differentiation (MAD) competition: A methodology for comparing computational models of perceptual quantities. Journal of Vision 8, 8.1–13 (02 2008). https://doi.org/10.1167/8.12.8 (accessed 12.09.2024).

33. Antsiferova, A., Abud, K., Gushchin, A., Shumitskaya, E., Lavrushkin, S., Vatolin, D.: Comparing the robustness of modern no-reference image- and video-quality metrics to adversarial attacks (2024).

34. Wang, Z., Bovik, A., Sheikh, H., Simoncelli, E.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004). https://doi.org/10.1109/TIP.2003.819861 (accessed 12.09.2024).

35. Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 586–595 (2018).

36. Ding, K., Ma, K., Wang, S., Simoncelli, E.P.: Image quality assessment: Unifying structure and texture similarity. IEEE Transactions on Pattern Analysis and Machine Intelligence 44(5), 2567–2581 (2020).

37. Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.H., Shao, L.: Multi-stage progressive image restoration. In: CVPR (2021).

38. Wang, X., Xie, L., Dong, C., Shan, Y.: Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In: International Conference on Computer Vision Workshops (ICCVW) (2021).

39. Antsiferova, A., Abud, K., Gushchin, A., Shumitskaya, E., Lavrushkin, S., Vatolin, D.: Comparing the robustness of modern no-reference image- and video-quality metrics to adversarial attacks (2024).

40. Li, D., Jiang, T., Jiang, M.: Norm-in-norm loss with faster convergence and better performance for image quality assessment. In: Proceedings of the 28th ACM International Conference on Multimedia. pp. 789–797 (2020).

41. Zhu, H., Li, L., Wu, J., Dong, W., Shi, G.: MetaIQA: Deep meta-learning for no-reference image quality assessment. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 14143–14152 (2020).

42. Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 3677–3686 (2020).

43. Antsiferova, A., Lavrushkin, S., Smirnov, M., Gushchin, A., Vatolin, D., Kulikov, D.: Video compression dataset and benchmark of learning-based video-quality metrics. Advances in Neural Information Processing Systems 35, 13814–13825 (2022).

44. https://videoprocessing.ai/benchmarks/video-quality-metrics_nrm.html (accessed 12.09.2024).


For citations:


GUSHCHIN A.E., ANTSIFEROVA A.V., VATOLIN D.S. Adversarial Purification for No-Reference Image-Quality Metrics: Applicability Study and New Methods. Proceedings of the Institute for System Programming of the RAS (Proceedings of ISP RAS). 2025;37(3):69-84. (In Russ.) https://doi.org/10.15514/ISPRAS-2025-37(3)-5



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2079-8156 (Print)
ISSN 2220-6426 (Online)