TY - JOUR
T1 - Invisible Threats in the Data: A Study on Data Poisoning Attacks in Deep Generative Models
AU - Yang, Ziying
AU - Zhang, Jie
AU - Wang, Wei
AU - Li, Huan
PY - 2024
Y1 - 2024/09/27
N2 - Deep Generative Models (DGMs), a state-of-the-art technology in artificial intelligence, find extensive applications across various domains. However, their security concerns have gained increasing prominence, particularly with regard to invisible backdoor attacks. Most existing backdoor attack methods rely on visible triggers that are easy to detect and defend against. Although some studies have explored invisible backdoor attacks, they often require modifying the generator's parameters or adding components to it, which limits their practicality. In this study, we overcome these limitations by proposing a novel method for invisible backdoor attacks. We employ an encoder–decoder network to ‘poison’ the data during the preparation stage, without modifying the model itself. Through careful design, the trigger remains visually undetectable, substantially improving the stealthiness and success rate of the attack. Consequently, this attack method poses a serious threat to the security of DGMs while presenting new challenges for defense mechanisms. We therefore urge researchers to intensify their investigation of DGM security issues and to jointly promote the sound development of DGM security.
DO - 10.3390/app14198742
M3 - Article
SN - 2076-3417
VL - 14
JO - Applied Sciences (Switzerland)
JF - Applied Sciences (Switzerland)
IS - 19
ER -