Information technologies and systems

February 14, 2025; Boston, USA: VII International Scientific and Practical Conference «SCIENTIFIC PRACTICE: MODERN AND CLASSICAL RESEARCH METHODS»


ABOUT DATA POISONING TECHNIQUES USED BY ATTACKERS IN NEURAL NETWORK TRAINING


DOI
https://doi.org/10.36074/logos-14.02.2025.040
Published
14.03.2025

Abstract

Currently, tools based on artificial intelligence (hereinafter, AI) methods are increasingly being introduced into all areas of everyday life, as well as into many industrial applications. Most such tools use artificial neural networks as their mathematical basis (particularly for image recognition). As is well known, neural networks require training before they can be used in practice, and to carry out such training, the network's developers must have a large dataset describing real cases of the modeled object's behavior. This process, which may take weeks or even longer (for very large networks with millions of parameters), must be strictly controlled, in particular because a dangerous attack known as data poisoning can be carried out against it [1].
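To make the threat concrete, one of the simplest data-poisoning techniques is label flipping: an attacker who controls part of the training pipeline reassigns a fraction of the labels to random wrong classes, degrading the accuracy of the trained network. The sketch below is a hypothetical illustration (the function `poison_labels`, its parameters, and the toy labels are assumptions for demonstration, not taken from the paper):

```python
import numpy as np

def poison_labels(labels, flip_fraction=0.1, num_classes=10, seed=0):
    """Label-flipping poisoning sketch: flip a fraction of labels
    to a randomly chosen *wrong* class. Returns the poisoned copy
    and the indices of the altered samples."""
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    n_poison = int(flip_fraction * len(labels))
    idx = rng.choice(len(labels), size=n_poison, replace=False)
    for i in idx:
        # add a nonzero offset modulo num_classes so the new label
        # is guaranteed to differ from the original one
        poisoned[i] = (poisoned[i] + rng.integers(1, num_classes)) % num_classes
    return poisoned, idx

# toy example: 100 samples with labels 0..9
clean = np.arange(100) % 10
poisoned, idx = poison_labels(clean, flip_fraction=0.2)
```

Even this crude attack can measurably reduce test accuracy, which is why the abstract stresses that the training process must be strictly controlled; more subtle variants (e.g. clean-label or backdoor poisoning) perturb the inputs rather than the labels and are harder to detect.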

References

  1. Cinà, A. E., Grosse, K., Demontis, A., Biggio, B., Roli, F., & Pelillo, M. (2023). Machine learning security against data poisoning: Are we there yet? arXiv. https://arxiv.org/abs/2204.05986
  2. Sharma, S., Tripathi, R., Maurya, V. P., & Upadhyay, A. (2024). Invisible threats in the data: A study on data poisoning attacks in deep generative models. Applied Sciences, 14(19), 8742. https://doi.org/10.3390/app14198742