Adversarial attack and defence on handwritten Chinese character recognition

Guoteng Jiang, Zhuang Qian, Qiu Feng Wang*, Yan Wei, Kaizhu Huang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)


Deep Neural Networks (DNNs) have shown powerful performance in classification; however, their robustness has become a primary concern, e.g., under adversarial attack. To the best of our knowledge, there is no reported work on adversarial attacks against handwritten Chinese character recognition (HCCR). To this end, the classical adversarial attack method Projected Gradient Descent (PGD) is adopted to generate adversarial examples and evaluate the robustness of an HCCR model. Furthermore, we use adversarial examples in the training process to improve the robustness of the HCCR model. In the experiments, we utilize a frequently used DNN model for HCCR and evaluate its robustness on the benchmark dataset CASIA-HWDB. The experimental results show that recognition accuracy decreases severely on adversarial examples, demonstrating the vulnerability of the current HCCR model. In addition, recognition accuracy improves significantly after adversarial training, demonstrating its effectiveness.
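The abstract does not give implementation details, but the L∞ PGD attack it names follows a standard recipe: repeatedly step in the sign of the input gradient, then project back into an ε-ball around the original image. The sketch below illustrates this on a linear softmax classifier with an analytic gradient (a hypothetical stand-in for the paper's CNN; `eps`, `alpha`, and `steps` are assumed values, not the paper's settings):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over class logits.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def pgd_attack(W, b, x, y, eps=0.1, alpha=0.02, steps=10):
    """L-infinity PGD on a linear softmax classifier.

    Illustrative only: the paper attacks a CNN, for which the input
    gradient would come from backpropagation instead of the closed
    form used here.
    """
    x_orig = x.copy()
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(W @ x_adv + b)
        onehot = np.zeros_like(p)
        onehot[y] = 1.0
        grad = W.T @ (p - onehot)                 # d(cross-entropy)/dx for a linear model
        x_adv = x_adv + alpha * np.sign(grad)     # gradient-ascent step on the loss
        x_adv = np.clip(x_adv, x_orig - eps, x_orig + eps)  # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid pixel range
    return x_adv
```

Adversarial training, as used for the defence, would simply generate `x_adv` with this routine inside each training step and minimize the loss on the perturbed inputs instead of (or alongside) the clean ones.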

Original language: English
Article number: 012023
Journal: Journal of Physics: Conference Series
Issue number: 1
Publication status: Published - 1 Jun 2022
Event: 2022 6th International Conference on Machine Vision and Information Technology, CMVIT 2022 - Virtual, Online
Duration: 25 Feb 2022 → …


