TY - JOUR
T1 - Invisible Threats in the Data
T2 - A Study on Data Poisoning Attacks in Deep Generative Models
AU - Yang, Ziying
AU - Zhang, Jie
AU - Wang, Wei
AU - Li, Huan
N1 - Publisher Copyright:
© 2024 by the authors.
PY - 2024/10
Y1 - 2024/10
N2 - Deep Generative Models (DGMs), as a state-of-the-art technology in the field of artificial intelligence, find extensive applications across various domains. However, their security concerns have increasingly gained prominence, particularly with regard to invisible backdoor attacks. Currently, most backdoor attack methods rely on visible triggers that are easy to detect and defend against. Although some studies have explored invisible backdoor attacks, they often require modifying or adding parameters to the model generator, which is inconvenient in practice. In this study, we aim to overcome these limitations by proposing a novel method for invisible backdoor attacks. We employ an encoder–decoder network to ‘poison’ the data during the preparation stage without modifying the model itself. Through careful design, the trigger remains visually undetectable, substantially improving the attack's stealthiness and success rate. Consequently, this attack method poses a serious threat to the security of DGMs and presents new challenges for security mechanisms. We therefore urge researchers to intensify their investigation of DGM security issues and to jointly promote the healthy development of DGM security.
AB - Deep Generative Models (DGMs), as a state-of-the-art technology in the field of artificial intelligence, find extensive applications across various domains. However, their security concerns have increasingly gained prominence, particularly with regard to invisible backdoor attacks. Currently, most backdoor attack methods rely on visible triggers that are easy to detect and defend against. Although some studies have explored invisible backdoor attacks, they often require modifying or adding parameters to the model generator, which is inconvenient in practice. In this study, we aim to overcome these limitations by proposing a novel method for invisible backdoor attacks. We employ an encoder–decoder network to ‘poison’ the data during the preparation stage without modifying the model itself. Through careful design, the trigger remains visually undetectable, substantially improving the attack's stealthiness and success rate. Consequently, this attack method poses a serious threat to the security of DGMs and presents new challenges for security mechanisms. We therefore urge researchers to intensify their investigation of DGM security issues and to jointly promote the healthy development of DGM security.
KW - backdoor attack
KW - data poisoning
KW - deep generative models
KW - invisible trigger
UR - http://www.scopus.com/inward/record.url?scp=85206580854&partnerID=8YFLogxK
U2 - 10.3390/app14198742
DO - 10.3390/app14198742
M3 - Article
AN - SCOPUS:85206580854
SN - 2076-3417
VL - 14
JO - Applied Sciences (Switzerland)
JF - Applied Sciences (Switzerland)
IS - 19
M1 - 8742
ER -