Invisible Threats in the Data: A Study on Data Poisoning Attacks in Deep Generative Models

Ziying Yang, Jie Zhang*, Wei Wang, Huan Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Deep Generative Models (DGMs), as a state-of-the-art technology in the field of artificial intelligence, find extensive applications across various domains. However, their security concerns have increasingly gained prominence, particularly with regard to invisible backdoor attacks. Most current backdoor attack methods rely on visible triggers that are easily detected and defended against. Although some studies have explored invisible backdoor attacks, they often require modifying or adding parameters to the model generator, which is impractical in many settings. In this study, we aim to overcome these limitations by proposing a novel method for invisible backdoor attacks. We employ an encoder–decoder network to ‘poison’ the data during the preparation stage without modifying the model itself. Through careful design, the trigger remains visually undetectable, substantially improving the attack's stealthiness and success rate. Consequently, this attack method poses a serious threat to the security of DGMs and presents new challenges for defense mechanisms. We therefore urge researchers to intensify their investigation of DGM security issues and to collaboratively promote the healthy development of DGM security.
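As a rough illustration of the data-preparation-stage poisoning described in the abstract, the sketch below shows how an encoder network could embed an imperceptible, bounded trigger into a fraction of the training images before the generative model ever sees them, leaving the model itself untouched. The names (TriggerEncoder, poison_batch), the L-infinity bound epsilon, and the poisoning rate are illustrative assumptions, not details taken from the paper.

    # Minimal sketch (not the authors' code) of encoder-based invisible data poisoning.
    # Assumed names: TriggerEncoder, poison_batch, epsilon, poison_rate.
    import torch
    import torch.nn as nn

    class TriggerEncoder(nn.Module):
        """Maps (clean image, trigger pattern) -> poisoned image with a bounded residual."""

        def __init__(self, channels: int = 3, epsilon: float = 4 / 255):
            super().__init__()
            self.epsilon = epsilon  # L-infinity bound keeps the perturbation visually undetectable
            self.net = nn.Sequential(
                nn.Conv2d(2 * channels, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, channels, kernel_size=3, padding=1),
            )

        def forward(self, image: torch.Tensor, trigger: torch.Tensor) -> torch.Tensor:
            # Concatenate image and trigger, predict a residual, squash and scale it
            residual = self.net(torch.cat([image, trigger], dim=1))
            return torch.clamp(image + self.epsilon * torch.tanh(residual), 0.0, 1.0)

    def poison_batch(images: torch.Tensor, encoder: TriggerEncoder,
                     trigger: torch.Tensor, poison_rate: float = 0.1) -> torch.Tensor:
        """Poison a random fraction of a batch during data preparation; the DGM is never modified."""
        images = images.clone()
        n_poison = int(poison_rate * images.size(0))
        if n_poison == 0:
            return images
        idx = torch.randperm(images.size(0))[:n_poison]
        images[idx] = encoder(images[idx], trigger.expand(n_poison, -1, -1, -1))
        return images

    if __name__ == "__main__":
        with torch.no_grad():
            encoder = TriggerEncoder()
            trigger = torch.rand(1, 3, 32, 32)      # fixed secret trigger pattern
            clean = torch.rand(16, 3, 32, 32)       # stand-in for a training batch
            poisoned = poison_batch(clean, encoder, trigger)
            print((poisoned - clean).abs().max())   # residual stays within epsilon

Bounding the residual with tanh and a small epsilon is one common way to keep a learned trigger below visual perceptibility; the actual encoder–decoder architecture and training objective used in the paper may differ.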

Original language: English
Article number: 8742
Journal: Applied Sciences (Switzerland)
Volume: 14
Issue number: 19
DOIs
Publication status: Published - Oct 2024

Keywords

  • backdoor attack
  • data poisoning
  • deep generative models
  • invisible trigger
