Proceedings of the 2023 International Conference on Image, Algorithms and Artificial Intelligence (ICIAAI 2023)

The Investigation on Adversarial Attacks of Adversarial Samples Generated by Filter Effects

Authors
Qincheng Yang1, *, Jianing Yao2
1International E-Commerce and Law, Beijing University of Posts and Telecommunications, Beijing, 100876, China
2Computer Science and Technology, Zhejiang University of Technology, Shaoxing, 312030, China
*Corresponding author. Email: 2020213359@bupt.edu.cn
Available Online 27 November 2023.
DOI
10.2991/978-94-6463-300-9_64
Keywords
Computer Vision; Adversarial Attack; Filter Effects
Abstract

Photography has become ubiquitous, and many people apply simple filters to enhance their pictures. Although these aesthetically pleasing images look natural to human viewers, they can cause computer vision systems to misclassify them, often because the filters introduce perturbations that are imperceptible to the human eye. In this paper, we apply common filter effects to images to test whether they disrupt a model's classification results, conducting black-box perturbation attacks and generating adversarial samples. To implement the attack, we filter the images with the following algorithms: histogram equalization to improve contrast and brightness; a blur filter to suppress noise, texture, and detail; a sharpening filter to accentuate edges and features for a crisper appearance; a smoothing filter to soften the image; and an edge enhancement filter to make contours more distinct. We conduct experiments with classic CNN models on two datasets of similar size and sample count but with very different image content. The experimental results demonstrate that filter perturbations do affect the models' classification outcomes.
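As a concrete illustration, the sketch below generates one candidate adversarial sample per filter effect named above, using Pillow's built-in filters as stand-ins; the paper's exact algorithms and parameters are not given here, and the input file name and target CNN are placeholders.

```python
# A minimal sketch of the five filter perturbations described above,
# using Pillow's built-in filters as stand-ins for the paper's exact
# algorithms (whose parameters are not specified here). "cat.jpg" and
# the target CNN are placeholders.
from PIL import Image, ImageFilter, ImageOps

def filtered_variants(img):
    """Return one candidate adversarial sample per filter effect."""
    return {
        "equalized":     ImageOps.equalize(img),          # contrast/brightness
        "blurred":       img.filter(ImageFilter.BLUR),    # suppress noise/detail
        "sharpened":     img.filter(ImageFilter.SHARPEN), # accentuate edges
        "smoothed":      img.filter(ImageFilter.SMOOTH),  # soften the image
        "edge_enhanced": img.filter(ImageFilter.EDGE_ENHANCE),
    }

if __name__ == "__main__":
    original = Image.open("cat.jpg").convert("RGB")
    for name, variant in filtered_variants(original).items():
        variant.save(f"cat_{name}.jpg")
        # In a black-box attack, feed each saved variant to the target
        # CNN and compare its prediction with that on the original image.
```

Because each variant is produced without access to the model's internals or gradients, comparing predictions before and after filtering is what makes this a black-box attack.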

Copyright
© 2023 The Author(s)
Open Access
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

Volume Title
Proceedings of the 2023 International Conference on Image, Algorithms and Artificial Intelligence (ICIAAI 2023)
Series
Advances in Computer Science Research
Publication Date
27 November 2023
ISBN
978-94-6463-300-9
ISSN
2352-538X
DOI
10.2991/978-94-6463-300-9_64

Cite this article

TY  - CONF
AU  - Qincheng Yang
AU  - Jianing Yao
PY  - 2023
DA  - 2023/11/27
TI  - The Investigation on Adversarial Attacks of Adversarial Samples Generated by Filter Effects
BT  - Proceedings of the 2023 International Conference on Image, Algorithms and Artificial Intelligence (ICIAAI 2023)
PB  - Atlantis Press
SP  - 618
EP  - 628
SN  - 2352-538X
UR  - https://doi.org/10.2991/978-94-6463-300-9_64
DO  - 10.2991/978-94-6463-300-9_64
ID  - Yang2023
ER  -