TY - JOUR
T1 - Inter-feature Relationship Certifies Robust Generalization of Adversarial Training
AU - Zhang, Shufei
AU - Qian, Zhuang
AU - Huang, Kaizhu
AU - Wang, Qiu Feng
AU - Gu, Bin
AU - Xiong, Huan
AU - Yi, Xinping
N1 - Publisher Copyright:
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
PY - 2024
Y1 - 2024
N2 - Whilst adversarial training has been shown to be a promising approach to promoting model robustness in computer vision and machine learning, adversarially trained models often suffer from poor robust generalization on unseen adversarial examples. That is, a large gap remains between performance on training and test adversarial examples. In this paper, we propose to tackle this issue from a new perspective: the inter-feature relationship. Specifically, we aim to generate adversarial examples that maximize the loss function while maintaining the inter-feature relationship of natural data, as well as penalizing the correlation distance between natural features and their adversarial counterparts. As a key contribution, we theoretically prove that training with such examples, while penalizing the distance between correlations, helps promote generalization on both natural and adversarial examples. We empirically validate our method through extensive experiments over different vision datasets (CIFAR-10, CIFAR-100, and SVHN), against several competitive methods. Our method substantially outperforms baseline adversarial training by a large margin, especially for PGD20 on CIFAR-10, CIFAR-100, and SVHN, with around 20%, 15%, and 29% improvements, respectively.
AB - Whilst adversarial training has been shown to be a promising approach to promoting model robustness in computer vision and machine learning, adversarially trained models often suffer from poor robust generalization on unseen adversarial examples. That is, a large gap remains between performance on training and test adversarial examples. In this paper, we propose to tackle this issue from a new perspective: the inter-feature relationship. Specifically, we aim to generate adversarial examples that maximize the loss function while maintaining the inter-feature relationship of natural data, as well as penalizing the correlation distance between natural features and their adversarial counterparts. As a key contribution, we theoretically prove that training with such examples, while penalizing the distance between correlations, helps promote generalization on both natural and adversarial examples. We empirically validate our method through extensive experiments over different vision datasets (CIFAR-10, CIFAR-100, and SVHN), against several competitive methods. Our method substantially outperforms baseline adversarial training by a large margin, especially for PGD20 on CIFAR-10, CIFAR-100, and SVHN, with around 20%, 15%, and 29% improvements, respectively.
KW - Adversarial examples
KW - Adversarial training
KW - Robustness
UR - http://www.scopus.com/inward/record.url?scp=85196282516&partnerID=8YFLogxK
U2 - 10.1007/s11263-024-02111-w
DO - 10.1007/s11263-024-02111-w
M3 - Article
AN - SCOPUS:85196282516
SN - 0920-5691
JO - International Journal of Computer Vision
JF - International Journal of Computer Vision
ER -