Proceedings of the 2024 2nd International Conference on Image, Algorithms and Artificial Intelligence (ICIAAI 2024)

Effectiveness Evaluation of Black-Box Data Poisoning Attack on Machine Learning Models

Authors
Junjing Zhan1, *, Zhongxing Zhang2, Ke Zhou3
1School of Computing, Beijing Institute of Technology, Zhuhai, Guangdong, 519088, China
2Software College, Taiyuan University of Technology, Jinzhong, Shanxi, 030600, China
3School of Software, Tianjin Chengjian University, Xiqing District, Tianjin, 300192, China
*Corresponding author. Email: 210021101610@bitzh.edu.cn
Available Online 16 October 2024.
DOI
10.2991/978-94-6463-540-9_68
Keywords
Poisoning Attack; Machine Learning; Black Box Attack
Abstract

As machine learning has been widely adopted in face recognition, natural language processing, autonomous driving, and medical systems, attacks against machine learning have emerged alongside it. These attacks can pose serious safety risks, such as a biometric authentication system being fooled or an autonomous vehicle misclassifying a maliciously altered parking sign. The security and privacy of machine learning have therefore become increasingly prominent concerns as its applications grow. A data poisoning attack targets machine learning models by contaminating their training data so that the trained model produces incorrect results, creating potential safety hazards. In this paper, a poisoning attack strategy for black-box machine learning models is adopted to carry out a black-box attack. In the experiments, a data poisoning attack on a machine learning model is carried out successfully, indicating that an attacker can induce targeted misclassification of samples. The purpose of this paper is to explore the security threats that may exist in current machine learning algorithms and to motivate further study of defense measures that improve algorithmic security, preventing malicious users and attackers from tampering with a model's training data and input samples or from stealing model parameters, which would compromise the model's confidentiality, availability, and integrity.
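
To make the threat model concrete, the sketch below shows a minimal targeted label-flipping poisoning attack in Python with scikit-learn. It illustrates the general technique only, not the authors' implementation: the synthetic dataset, the logistic-regression victim, the poison_labels helper, and the flip rates are all assumptions chosen for demonstration.

# Illustrative sketch, not the paper's method: a targeted label-flipping
# data poisoning attack. The attacker relabels some class-0 training
# samples as class 1 and needs no access to the victim model's
# parameters, matching a black-box setting.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary data standing in for the victim's training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def poison_labels(y_train, source=0, target=1, rate=0.2):
    # Relabel a random fraction of `source`-class points as `target`.
    y_p = y_train.copy()
    src_idx = np.flatnonzero(y_p == source)
    flip = rng.choice(src_idx, size=int(rate * len(src_idx)), replace=False)
    y_p[flip] = target
    return y_p

for rate in (0.0, 0.2, 0.4, 0.6):
    model = LogisticRegression(max_iter=1000).fit(X_tr, poison_labels(y_tr, rate=rate))
    # Attack success: fraction of true class-0 test points predicted as class 1.
    preds = model.predict(X_te[y_te == 0])
    print(f"flip rate {rate:.0%}: targeted misclassification = {np.mean(preds == 1):.3f}")

Because the attacker here only injects mislabeled training samples and never inspects the victim model's parameters or gradients, the sketch stays within the black-box setting the abstract describes.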

Copyright
© 2024 The Author(s)
Open Access
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

Volume Title
Proceedings of the 2024 2nd International Conference on Image, Algorithms and Artificial Intelligence (ICIAAI 2024)
Series
Advances in Computer Science Research
Publication Date
16 October 2024
ISBN
978-94-6463-540-9
ISSN
2352-538X

Cite this article

TY  - CONF
AU  - Junjing Zhan
AU  - Zhongxing Zhang
AU  - Ke Zhou
PY  - 2024
DA  - 2024/10/16
TI  - Effectiveness Evaluation of Black-Box Data Poisoning Attack on Machine Learning Models
BT  - Proceedings of the 2024 2nd International Conference on Image, Algorithms and Artificial Intelligence (ICIAAI 2024)
PB  - Atlantis Press
SP  - 668
EP  - 676
SN  - 2352-538X
UR  - https://doi.org/10.2991/978-94-6463-540-9_68
DO  - 10.2991/978-94-6463-540-9_68
ID  - Zhan2024
ER  -