Proceedings of the Institute for System Programming of the RAS (Proceedings of ISP RAS)

Types of Attacks on Federated Neural Networks and Methods of Protection

https://doi.org/10.15514/ISPRAS-2024-36(1)-3

Abstract

Federated learning is a technology for privacy-preserving learning in distributed storage systems. It makes it possible to build a shared prediction model while all data remain in the local storage of the participants. Several devices take part in training the shared model, and each device trains the neural network on its own unique data. The devices interact only to adjust the weights of the shared model, after which the updated model is transmitted back to all devices. Training on multiple devices creates many opportunities to attack this type of network: after local training, the model parameters are sent over some communication channel to a central server or global model. Vulnerabilities in a federated network are therefore possible not only at the training stage on an individual device, but also at the data exchange stage, which increases the total number of possible vulnerabilities of federated neural networks. Since not only neural networks but also other models can be used to build federated classifiers, the types of attacks on the network also depend on the type of model used. Federated neural networks are a rather complex construction, different from ordinary neural networks and other classifiers: training takes place on different devices, both neural networks and simpler algorithms can be used, and data transfer between devices must be ensured, all of which makes them vulnerable to various types of attacks. These attacks come down to several main types that exploit classifier vulnerabilities. Protection against them can be implemented by improving the architecture of the classifier itself and by paying attention to data encryption.
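
The training loop outlined in the abstract (local training on private data, followed by aggregation of weight updates into a shared model) can be illustrated with a minimal federated-averaging sketch. The code below is an illustration only, assuming a NumPy linear model and a single local gradient step per round; the names local_update and federated_round are hypothetical and are not taken from the paper.

# Minimal sketch of one federated-averaging round, assuming a least-squares
# linear model and single-step local "training"; illustrative only.
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One local gradient step on the device's private data."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, devices):
    """One communication round: each device trains locally, the server
    aggregates the updates weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in devices:                      # raw data never leaves the device
        updates.append(local_update(global_weights.copy(), X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    # weighted average of local models becomes the new global model
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    devices = []                              # three devices with private datasets
    for n in (30, 50, 20):
        X = rng.normal(size=(n, 2))
        y = X @ true_w + 0.05 * rng.normal(size=n)
        devices.append((X, y))
    w = np.zeros(2)
    for _ in range(200):                      # repeated communication rounds
        w = federated_round(w, devices)
    print("recovered weights:", np.round(w, 2))

The point of the sketch is that raw data never leaves a device; only the locally computed weights reach the aggregation step, and that exchange of updates is precisely the additional attack surface the abstract identifies alongside attacks on local training.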

About the Authors

Valery Alekseevich KOSTENKO
Lomonosov Moscow State University
Russian Federation

Cand. Sci. (Tech.), Associate Professor at the Department of Automation of Computer Complex Systems, Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University. Research interests: scheduling theory, machine learning methods, real-time computing systems, scheduling of computations in data centers.



Alisa Evgenievna SELEZNEVA
Lomonosov Moscow State University
Russian Federation

Student at the Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University. Research interests: machine learning models and methods, federated neural networks, scheduling of computations in data centers.




For citations:


KOSTENKO V.A., SELEZNEVA A.E. Types of Attacks on Federated Neural Networks and Methods of Protection. Proceedings of the Institute for System Programming of the RAS (Proceedings of ISP RAS). 2024;36(1):35-44. (In Russ.) https://doi.org/10.15514/ISPRAS-2024-36(1)-3



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2079-8156 (Print)
ISSN 2220-6426 (Online)