TY - GEN
T1 - Certifying Better Robust Generalization for Unsupervised Domain Adaptation
AU - Gao, Zhiqiang
AU - Zhang, Shufei
AU - Huang, Kaizhu
AU - Wang, Qiufeng
AU - Zhang, Rui
AU - Zhong, Chaoliang
N1 - Funding Information:
The work was partially supported by the following: National Natural Science Foundation of China under nos. 61876155 and 61876154; Jiangsu Science and Technology Programme (Natural Science Foundation of Jiangsu Province) under no. BE2020006-4; Key Program Special Fund in XJTLU under no. KSF-T-06.
Publisher Copyright:
© 2022 ACM.
PY - 2022/10/10
Y1 - 2022/10/10
N2 - Recent studies explore how to obtain adversarial robustness for unsupervised domain adaptation (UDA). However, these efforts are dedicated to achieving an optimal trade-off between accuracy and robustness on a given or seen target domain, while ignoring the robust generalization issue over unseen adversarial data. Consequently, degraded performance is often observed when existing robust UDA methods are applied to future adversarial data. In this work, we make a first attempt to address the robust generalization issue of UDA. We conjecture that the poor robust generalization of present robust UDA methods may be caused by the large distribution gap among adversarial examples. We then provide an empirical and theoretical analysis showing that this large distribution gap is mainly owing to the discrepancy between feature-shift distributions. To reduce such discrepancy, a novel Anchored Feature-Shift Regularization (AFSR) method is designed with a certified robust generalization bound. We conduct a series of experiments on benchmark UDA datasets. Experimental results validate the effectiveness of our proposed AFSR over many existing robust UDA methods.
AB - Recent studies explore how to obtain adversarial robustness for unsupervised domain adaptation (UDA). However, these efforts are dedicated to achieving an optimal trade-off between accuracy and robustness on a given or seen target domain, while ignoring the robust generalization issue over unseen adversarial data. Consequently, degraded performance is often observed when existing robust UDA methods are applied to future adversarial data. In this work, we make a first attempt to address the robust generalization issue of UDA. We conjecture that the poor robust generalization of present robust UDA methods may be caused by the large distribution gap among adversarial examples. We then provide an empirical and theoretical analysis showing that this large distribution gap is mainly owing to the discrepancy between feature-shift distributions. To reduce such discrepancy, a novel Anchored Feature-Shift Regularization (AFSR) method is designed with a certified robust generalization bound. We conduct a series of experiments on benchmark UDA datasets. Experimental results validate the effectiveness of our proposed AFSR over many existing robust UDA methods.
KW - adversarial robustness
KW - adversarial training
KW - robust generalization
KW - unsupervised domain adaptation
UR - http://www.scopus.com/inward/record.url?scp=85151151259&partnerID=8YFLogxK
U2 - 10.1145/3503161.3548323
DO - 10.1145/3503161.3548323
M3 - Conference Proceeding
AN - SCOPUS:85151151259
T3 - MM 2022 - Proceedings of the 30th ACM International Conference on Multimedia
SP - 2399
EP - 2410
BT - MM 2022 - Proceedings of the 30th ACM International Conference on Multimedia
PB - Association for Computing Machinery, Inc
T2 - 30th ACM International Conference on Multimedia, MM 2022
Y2 - 10 October 2022 through 14 October 2022
ER -