TY - GEN
T1 - Delving into Adversarial Robustness on Document Tampering Localization
AU - Shao, Huiru
AU - Qian, Zhuang
AU - Huang, Kaizhu
AU - Wang, Wei
AU - Huang, Xiaowei
AU - Wang, Qiufeng
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025
Y1 - 2025
N2 - Recent advances in document forgery techniques produce malicious yet nearly visually untraceable alterations, posing a major challenge for document tampering localization (DTL). Despite significant recent progress, there has been surprisingly limited exploration of adversarial robustness in DTL. This paper presents the first effort to uncover the vulnerability of most existing DTL models to adversarial attacks, highlighting the need for greater attention within the DTL community. In pursuit of robust DTL, we demonstrate that adversarial training can improve model robustness and effectively protect against adversarial attacks. As a notable advancement, we further introduce a latent manifold adversarial training approach that enhances adversarial robustness in DTL by incorporating perturbations on the latent manifold of adversarial examples, rather than relying exclusively on label-guided information. Extensive experiments on DTL benchmark datasets show the necessity of adversarial training, and our proposed manifold-based method significantly improves adversarial robustness against both white-box and black-box attacks. Code will be available at https://github.com/SHR-77/DTL-ARob.git.
AB - Recent advances in document forgery techniques produce malicious yet nearly visually untraceable alterations, posing a major challenge for document tampering localization (DTL). Despite significant recent progress, there has been surprisingly limited exploration of adversarial robustness in DTL. This paper presents the first effort to uncover the vulnerability of most existing DTL models to adversarial attacks, highlighting the need for greater attention within the DTL community. In pursuit of robust DTL, we demonstrate that adversarial training can improve model robustness and effectively protect against adversarial attacks. As a notable advancement, we further introduce a latent manifold adversarial training approach that enhances adversarial robustness in DTL by incorporating perturbations on the latent manifold of adversarial examples, rather than relying exclusively on label-guided information. Extensive experiments on DTL benchmark datasets show the necessity of adversarial training, and our proposed manifold-based method significantly improves adversarial robustness against both white-box and black-box attacks. Code will be available at https://github.com/SHR-77/DTL-ARob.git.
KW - Adversarial robustness
KW - Document forgery localization
KW - Forensic
UR - http://www.scopus.com/inward/record.url?scp=85210469357&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-73650-6_17
DO - 10.1007/978-3-031-73650-6_17
M3 - Conference Proceeding
AN - SCOPUS:85210469357
SN - 9783031736490
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 290
EP - 306
BT - Computer Vision – ECCV 2024 - 18th European Conference, Proceedings
A2 - Leonardis, Aleš
A2 - Ricci, Elisa
A2 - Roth, Stefan
A2 - Russakovsky, Olga
A2 - Sattler, Torsten
A2 - Varol, Gül
PB - Springer Science and Business Media Deutschland GmbH
T2 - 18th European Conference on Computer Vision, ECCV 2024
Y2 - 29 September 2024 through 4 October 2024
ER -
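
The abstract above describes standard adversarial training as the baseline defense. For readers unfamiliar with that baseline, the following is a minimal PyTorch sketch of L-infinity PGD adversarial training for a DTL model treated as binary per-pixel segmentation. All names and hyperparameters here are illustrative assumptions, not the authors' implementation; the paper's latent-manifold variant additionally perturbs intermediate features of adversarial examples, a step omitted here (see the linked repository for the released code).

import torch
import torch.nn.functional as F


def pgd_attack(model, images, masks, eps=8 / 255, alpha=2 / 255, steps=5):
    """Craft L-infinity PGD adversarial examples against a DTL model.

    images: (N, C, H, W) in [0, 1]; masks: (N, 1, H, W) float tampering masks.
    """
    adv = images.clone().detach()
    # Random start inside the epsilon ball.
    adv = (adv + torch.empty_like(adv).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        logits = model(adv)  # per-pixel tampering logits
        loss = F.binary_cross_entropy_with_logits(logits, masks)
        grad = torch.autograd.grad(loss, adv)[0]
        # Ascend the loss, then project back into the epsilon ball.
        adv = adv.detach() + alpha * grad.sign()
        adv = images + (adv - images).clamp(-eps, eps)
        adv = adv.clamp(0, 1).detach()
    return adv


def adversarial_training_step(model, optimizer, images, masks):
    """One adversarial training step: craft attacks, then train on them."""
    model.eval()  # freeze batch-norm statistics while crafting the attack
    adv = pgd_attack(model, images, masks)
    model.train()
    optimizer.zero_grad()
    loss = F.binary_cross_entropy_with_logits(model(adv), masks)
    loss.backward()
    optimizer.step()
    return loss.item()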