Modeling Scenarios of Destructive Impact on the Integrity of Machine Learning Models
https://doi.org/10.15514/ISPRAS-2025-37(3)-4
Abstract
The article develops models of destructive impact on the integrity of machine learning models, based on SIR-type forecasting of the scale of threats and the risks of losses under various computer attack scenarios. It presents an original model of information security threats to the technical components of artificial intelligence under heterogeneous mass computer attacks, capturing the relevant vulnerabilities and the possible methods of adversary action. The authors develop a methodology for adapting modified SIR models of natural epidemics to reveal similarities and analogies in how destructive failures spread through AI systems under heterogeneous mass and targeted impacts. The identified patterns make it possible to assess the risks of possible damage to integrity and to develop effective strategies for preventing and correcting distortions of machine learning models.
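The article's central device, adapting SIR epidemic dynamics to the spread of integrity failures across a population of machine learning models, is not spelled out numerically on this page. Below is a minimal illustrative sketch of the classical SIR equations under assumed parameters; the mapping of compartments to model states and the rate values are the sketch's own assumptions, not the modernized model from the paper.

```python
# Minimal classical SIR sketch. Compartment mapping is an assumption:
# S: models with intact integrity, I: models with compromised integrity,
# R: models detected and restored. beta/gamma values are illustrative only.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    """Classical SIR right-hand side: returns (dS/dt, dI/dt, dR/dt)."""
    s, i, r = y
    ds = -beta * s * i              # intact models become compromised on contact
    di = beta * s * i - gamma * i   # compromise spreads; some models are restored
    dr = gamma * i                  # restored models leave the infectious pool
    return ds, di, dr

beta, gamma = 0.4, 0.1              # assumed compromise and restoration rates
y0 = (0.99, 0.01, 0.0)              # initial fractions of the model population
t = np.linspace(0, 100, 1001)
s, i, r = odeint(sir, y0, t, args=(beta, gamma)).T

print(f"peak share of compromised models: {i.max():.2f}")
print(f"basic reproduction number R0 = {beta / gamma:.1f}")
```

With these assumed rates, R0 = beta/gamma > 1, so a small initial compromise spreads before restoration overtakes it; this is the kind of threshold behavior the abstract's risk forecasting relies on.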
About the Authors
Artem Bakytzhanovich MENISOV
Russian Federation
Cand. Sci. (Tech.), Senior lecturer of the Department of Information Collection and Processing Systems at the Mozhaysky Military Space Academy. Research interests: building trusted artificial intelligence systems, using machine learning for information security tasks.
Alexander Grigorievich LOMAKO
Russian Federation
Dr. Sci. (Tech.), Professor, Professor of the Department of Information Collection and Processing Systems at the Mozhaysky Military Space Academy. His research interests include theoretical and systems programming and the modeling of intelligent behavior of cybernetic systems as applied to information security problems.
References
1. Qin Y. et al. Artificial intelligence and economic development: An evolutionary investigation and systematic review. Journal of the Knowledge Economy, 2024, vol. 15, no. 1, pp. 1736-1770.
2. Menisov A. B., Lomako A. G., Sabirov T. R. A method for testing linguistic machine learning models with textual adversarial examples. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2023, vol. 23, no. 5, pp. 946-954. (In Russ.)
3. Papagianni A. et al. Frugal and Robust AI for Defence Advanced Intelligence. In: Paradigms on Technology Development for Security Practitioners. Cham: Springer Nature Switzerland, 2024, pp. 427-437.
4. Weng Y., Wu J. Leveraging artificial intelligence to enhance data security and combat cyber attacks. Journal of Artificial Intelligence General Science (JAIGS), 2024, vol. 5, no. 1, pp. 392-399.
5. Nguyen T. T. et al. Manipulating recommender systems: A survey of poisoning attacks and countermeasures. ACM Computing Surveys, 2024, vol. 57, no. 1, pp. 1-39.
6. Rosenblatt M. et al. Data leakage inflates prediction performance in connectome-based machine learning models. Nature Communications, 2024, vol. 15, no. 1, article 1829.
7. Kim S. et al. ProPILE: Probing privacy leakage in large language models. Advances in Neural Information Processing Systems, 2024, vol. 36.
8. Menisov A. B. The Threat Landscape of Artificial Intelligence Systems: a monograph. Moscow: IPR Media, 2023. 126 p. (In Russ.)
9. Kostogryzov A. I., Nistratov A. A. Analysis of threats of malicious modification of a machine learning model for systems with artificial intelligence. Voprosy kiberbezopasnosti [Cybersecurity Issues], 2023, no. 5, p. 9. (In Russ.)
For citations:
MENISOV A.B., LOMAKO A.G. Modeling Scenarios of Destructive Impact on the Integrity of Machine Learning Models. Proceedings of the Institute for System Programming of the RAS (Proceedings of ISP RAS). 2025;37(3):59-68. (In Russ.) https://doi.org/10.15514/ISPRAS-2025-37(3)-4