TY - GEN
T1 - Towards Better Robustness Against Natural Corruptions in Document Tampering Localization
AU - Shao, Huiru
AU - Huang, Kaizhu
AU - Wang, Wei
AU - Huang, Xiaowei
AU - Wang, Qiufeng
N1 - Publisher Copyright:
© 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2025/4/11
Y1 - 2025/4/11
AB - Remarkable advances have been achieved by recent document tampering localization (DTL) systems. However, when confronted with corrupted tampered document images, these systems are critically vulnerable in real-world scenarios. While robustness against adversarial attacks has been extensively studied via adversarial training (AT), robustness to natural corruptions remains under-explored for DTL. In this paper, to overcome forensic dependency, we propose adversarial forensic regularization (AFR), based on min-max optimization, to improve robustness. Specifically, we adopt mutual information (MI) to represent the forensic dependency between two random variables over the tampered and authentic pixel spaces, where the MI is approximated by the Jensen-Shannon divergence (JSD) with empirical sampling. To further enable a trade-off between predictive representations on clean tampered document pixels and robust ones on corrupted pixels, an additional regularization term is formulated with the divergence between the clean and perturbed pixel distributions (DDR). Following the min-max optimization framework, our method also works well against adversarial attacks. To evaluate the proposed method, we collect a dataset (TSorie-CRP) for assessing robustness against natural corruptions in real scenarios. Extensive experiments demonstrate the effectiveness of our method against natural corruptions. Unsurprisingly, our method also achieves good performance against adversarial attacks on DTL benchmark datasets.
UR - http://www.scopus.com/inward/record.url?scp=105003910543&partnerID=8YFLogxK
U2 - 10.1609/aaai.v39i1.32052
DO - 10.1609/aaai.v39i1.32052
M3 - Conference Proceeding
AN - SCOPUS:105003910543
T3 - Proceedings of the AAAI Conference on Artificial Intelligence
SP - 703
EP - 710
BT - Special Track on AI Alignment
A2 - Walsh, Toby
A2 - Shah, Julie
A2 - Kolter, Zico
PB - Association for the Advancement of Artificial Intelligence
T2 - 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025
Y2 - 25 February 2025 through 4 March 2025
ER -