TY - JOUR
T1 - A survey of robust adversarial training in pattern recognition
T2 - Fundamental, theory, and methodologies
AU - Qian, Zhuang
AU - Huang, Kaizhu
AU - Wang, Qiu-Feng
AU - Zhang, Xu-Yao
N1 - Funding Information:
This work has been supported by the National Key Research and Development Program under Grant No. 2018AAA0100400, the National Natural Science Foundation of China (NSFC) under Grants 61876155 and 61876154, and the Jiangsu Science and Technology Programme (Natural Science Foundation of Jiangsu Province) under Grant No. BE2020006-4.
Publisher Copyright:
© 2022 Elsevier Ltd
PY - 2022/11
Y1 - 2022/11
N2 - Deep neural networks have achieved remarkable success in machine learning, computer vision, and pattern recognition in the last few decades. Recent studies, however, show that neural networks (both shallow and deep) may be easily fooled by certain imperceptibly perturbed input samples called adversarial examples. This security vulnerability has spurred a large body of research in recent years, because the wide deployment of neural networks can introduce real-world threats. To address robustness against adversarial examples, particularly in pattern recognition, robust adversarial training has become a mainstream approach. Various ideas, methods, and applications have boomed in the field. Yet, a deep understanding of adversarial training, including its characteristics, interpretations, theories, and connections among different models, has remained elusive. This paper presents a comprehensive survey offering a systematic and structured investigation of robust adversarial training in pattern recognition. We start with fundamentals, including the definition, notation, and properties of adversarial examples. We then introduce a general theoretical framework with gradient regularization for defending against adversarial samples, namely robust adversarial training, with visualizations and interpretations of why adversarial training can lead to model robustness. Connections are also established between adversarial training and other traditional learning theories. After that, we summarize, review, and discuss various methodologies and defense/training algorithms in a structured way. Finally, we present analysis, outlook, and remarks on adversarial training.
AB - Deep neural networks have achieved remarkable success in machine learning, computer vision, and pattern recognition in the last few decades. Recent studies, however, show that neural networks (both shallow and deep) may be easily fooled by certain imperceptibly perturbed input samples called adversarial examples. This security vulnerability has spurred a large body of research in recent years, because the wide deployment of neural networks can introduce real-world threats. To address robustness against adversarial examples, particularly in pattern recognition, robust adversarial training has become a mainstream approach. Various ideas, methods, and applications have boomed in the field. Yet, a deep understanding of adversarial training, including its characteristics, interpretations, theories, and connections among different models, has remained elusive. This paper presents a comprehensive survey offering a systematic and structured investigation of robust adversarial training in pattern recognition. We start with fundamentals, including the definition, notation, and properties of adversarial examples. We then introduce a general theoretical framework with gradient regularization for defending against adversarial samples, namely robust adversarial training, with visualizations and interpretations of why adversarial training can lead to model robustness. Connections are also established between adversarial training and other traditional learning theories. After that, we summarize, review, and discuss various methodologies and defense/training algorithms in a structured way. Finally, we present analysis, outlook, and remarks on adversarial training.
KW - Adversarial examples
KW - Adversarial training
KW - Robust learning
UR - http://www.scopus.com/inward/record.url?scp=85133846792&partnerID=8YFLogxK
U2 - 10.1016/j.patcog.2022.108889
DO - 10.1016/j.patcog.2022.108889
M3 - Article
AN - SCOPUS:85133846792
SN - 0031-3203
VL - 131
JO - Pattern Recognition
JF - Pattern Recognition
M1 - 108889
ER -