TY - JOUR
T1 - Adversarial attack and defence on handwritten Chinese character recognition
AU - Jiang, Guoteng
AU - Qian, Zhuang
AU - Wang, Qiu Feng
AU - Wei, Yan
AU - Huang, Kaizhu
N1 - Funding Information:
The work was funded by the National Natural Science Foundation of China under nos. 61876154 and 61876155; the Jiangsu Science and Technology Programme (Natural Science Foundation of Jiangsu Province) under no. BE2020006-4; and the Key Program Special Fund in XJTLU under no. KSF-T-06.
Publisher Copyright:
© Published under licence by IOP Publishing Ltd.
PY - 2022/6/1
Y1 - 2022/6/1
AB - Deep Neural Networks (DNNs) have shown powerful performance in classification; however, their robustness has become a primary concern, e.g., vulnerability to adversarial attacks. To the best of our knowledge, no work has been reported on adversarial attacks against handwritten Chinese character recognition (HCCR). To this end, we adopt the classical Projected Gradient Descent (PGD) attack to generate adversarial examples and evaluate the robustness of an HCCR model. Furthermore, we use adversarial examples during training to improve the model's robustness. In the experiments, we apply a widely used DNN model to HCCR and evaluate its robustness on the benchmark dataset CASIA-HWDB. The experimental results show that recognition accuracy decreases severely on adversarial examples, demonstrating the vulnerability of the current HCCR model. In addition, adversarial training improves recognition accuracy significantly, demonstrating its effectiveness.
UR - http://www.scopus.com/inward/record.url?scp=85132044242&partnerID=8YFLogxK
U2 - 10.1088/1742-6596/2278/1/012023
DO - 10.1088/1742-6596/2278/1/012023
M3 - Article
AN - SCOPUS:85132044242
SN - 1742-6588
VL - 2278
JO - Journal of Physics: Conference Series
JF - Journal of Physics: Conference Series
IS - 1
M1 - 012023
T2 - 2022 6th International Conference on Machine Vision and Information Technology, CMVIT 2022
Y2 - 25 February 2022
ER -