TY - JOUR
T1 - Attacking Sequential Learning Models with Style Transfer Based Adversarial Examples
AU - Zhang, Zhilu
AU - Yang, Xi
AU - Huang, Kaizhu
N1 - Funding Information:
The work was partially supported by the following: National Natural Science Foundation of China under no. 61876155; Jiangsu Science and Technology Programme (Natural Science Foundation of Jiangsu Province) under nos. BK20181189, BK20181190, and BE2020006-4; Key Program Special Fund in XJTLU under nos. KSF-T-06, KSF-E-26, and KSF-A-10.
Publisher Copyright:
© Published under licence by IOP Publishing Ltd.
PY - 2021/4/27
Y1 - 2021/4/27
N2 - In the field of deep neural network security, it has recently been found that non-sequential networks are vulnerable to adversarial examples. However, few studies have investigated adversarial attacks on sequential tasks. To this end, in this paper, we propose a novel method to generate adversarial examples for sequential tasks. Specifically, an image style transfer method is used to generate adversarial examples for a Scene Text Recognition (STR) network that differ from the original image only in style. While these examples do not interfere with human visual recognition of the image content, they significantly mislead the recognition results of sequential networks. Moreover, under a black-box attack setting, in both digital and physical environments, we show that the proposed method can exploit cross-text shape information and successfully attack the TPS-ResNet-BiLSTM-Attention (TRBA) and Convolutional Recurrent Neural Network (CRNN) models. Finally, we further demonstrate that physical adversarial examples can easily mislead commercial recognition algorithms, e.g. iFLYTEK and Youdao, suggesting that STR models are also highly vulnerable to attacks from adversarial examples.
AB - In the field of deep neural network security, it has recently been found that non-sequential networks are vulnerable to adversarial examples. However, few studies have investigated adversarial attacks on sequential tasks. To this end, in this paper, we propose a novel method to generate adversarial examples for sequential tasks. Specifically, an image style transfer method is used to generate adversarial examples for a Scene Text Recognition (STR) network that differ from the original image only in style. While these examples do not interfere with human visual recognition of the image content, they significantly mislead the recognition results of sequential networks. Moreover, under a black-box attack setting, in both digital and physical environments, we show that the proposed method can exploit cross-text shape information and successfully attack the TPS-ResNet-BiLSTM-Attention (TRBA) and Convolutional Recurrent Neural Network (CRNN) models. Finally, we further demonstrate that physical adversarial examples can easily mislead commercial recognition algorithms, e.g. iFLYTEK and Youdao, suggesting that STR models are also highly vulnerable to attacks from adversarial examples.
UR - http://www.scopus.com/inward/record.url?scp=85105453494&partnerID=8YFLogxK
U2 - 10.1088/1742-6596/1880/1/012021
DO - 10.1088/1742-6596/1880/1/012021
M3 - Conference article
AN - SCOPUS:85105453494
SN - 1742-6588
VL - 1880
JO - Journal of Physics: Conference Series
JF - Journal of Physics: Conference Series
IS - 1
M1 - 012021
T2 - 5th International Conference on Machine Vision and Information Technology, CMVIT 2021
Y2 - 26 February 2021
ER -