Proceedings of the Institute for System Programming of the RAS (Proceedings of ISP RAS)

Security Analysis of the Draft National Standard «Neural Network Algorithms in Protected Execution. Automatic Training of Neural Network Models on Small Samples in Classification Tasks»

https://doi.org/10.15514/ISPRAS-2023-35(6)-11

Abstract

We propose a membership inference attack against the neural network classification algorithm from the draft national standard developed by Omsk State Technical University under the auspices of the Technical Committee on Standardization «Artificial Intelligence» (TC 164). The attack determines whether given data were used to train the neural network and is thus aimed at violating the confidentiality of the training set. Our results show that the protection mechanism for neural network classifiers described in the draft standard does not provide the declared security properties. These results were previously announced at the RusCrypto’2023 conference.
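For readers unfamiliar with this class of attacks, the following is a minimal sketch of a generic confidence-threshold membership inference attack in the spirit of Shokri et al. (IEEE S&P 2017). It is illustrative only and is not the specific attack constructed in the paper: the synthetic dataset, the MLPClassifier target model, and the midpoint decision threshold are all assumptions made for this example.

```python
# Minimal sketch of a confidence-based membership inference attack.
# NOT the attack from the paper: data, model, and threshold are
# illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a small training sample (the draft standard
# targets training on small samples in classification tasks).
X, y = make_classification(n_samples=200, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Target classifier; small samples encourage overfitting, which is
# exactly what membership inference exploits.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

def max_confidence(model, X):
    """Highest class probability the model assigns to each point."""
    return model.predict_proba(X).max(axis=1)

conf_in = max_confidence(clf, X_train)   # members of the training set
conf_out = max_confidence(clf, X_out)    # non-members

# Attack rule: predict "member" when confidence exceeds a threshold,
# here naively set to the midpoint of the two mean confidences.
threshold = (conf_in.mean() + conf_out.mean()) / 2
guesses = np.concatenate([conf_in, conf_out]) > threshold
truth = np.concatenate(
    [np.ones_like(conf_in), np.zeros_like(conf_out)]).astype(bool)
print(f"attack accuracy: {(guesses == truth).mean():.2f}")  # > 0.5 => leakage
```

A model trained on a small sample tends to overfit, so its confidence on training points is systematically higher than on unseen points; the sketch above exploits exactly this gap, which is why attack accuracy noticeably above 0.5 indicates a leak of training-set membership.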

About the Authors

Grigory Borisovich MARSHALKO
Technical Committee for Standardization «Cryptographic Information Protection»
Russian Federation

Expert, Technical Committee for Standardization «Cryptography and security mechanisms». Research interests: information security, cryptography, biometric identification.



Roman Alexandrovich ROMANENKOV
Technical Committee for Standardization «Cryptographic Information Protection»
Russian Federation

Expert, Technical Committee for Standardization «Cryptography and security mechanisms». Research interests: information security, cryptography, modeling of random variables, applied mathematical statistics.



Julia Anatolievna TRUFANOVA
Technical Committee for Standardization «Cryptographic Information Protection»
Russian Federation

Expert, Technical Committee for Standardization «Cryptography and security mechanisms». Research interests: information security, biometric identification, machine learning.



For citations:


MARSHALKO G.B., ROMANENKOV R.A., TRUFANOVA J.A. Security Analysis of the Draft National Standard «Neural Network Algorithms in Protected Execution. Automatic Training of Neural Network Models on Small Samples in Classification Tasks». Proceedings of the Institute for System Programming of the RAS (Proceedings of ISP RAS). 2023;35(6):179-188. (In Russ.) https://doi.org/10.15514/ISPRAS-2023-35(6)-11



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2079-8156 (Print)
ISSN 2220-6426 (Online)