Attacking Sequential Learning Models with Style Transfer Based Adversarial Examples

Zhilu Zhang, Xi Yang, Kaizhu Huang*

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

In the field of deep neural network security, it has recently been found that non-sequential networks are vulnerable to adversarial examples. However, few studies have investigated adversarial attacks on sequential tasks. To this end, in this paper we propose a novel method to generate adversarial examples for sequential tasks. Specifically, an image style transfer method is used to generate adversarial examples for a Scene Text Recognition (STR) network that differ from the original image only in style. While such examples do not interfere with a human's ability to recognize the text in the image, they significantly mislead the recognition results of sequential networks. Moreover, using a black-box attack in both digital and physical environments, we show that the proposed method can exploit cross-text shape information and successfully attack the TPS-ResNet-BiLSTM-Attention (TRBA) and Convolutional Recurrent Neural Network (CRNN) models. Finally, we further demonstrate that physical adversarial examples can easily mislead commercial recognition algorithms, e.g. iFLYTEK and Youdao, suggesting that STR models are also highly vulnerable to attacks from adversarial examples.
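
The following is a minimal sketch of the black-box attack loop the abstract describes: candidate images are produced by restyling the original text image, and the target recognizer is queried until its prediction no longer matches the ground-truth text. The `style_transfer` and `recognize` callables, and the strength values, are hypothetical placeholders for any off-the-shelf style transfer implementation and any black-box STR model (e.g. CRNN, TRBA, or a commercial OCR API); this is not the authors' released code.

```python
from typing import Any, Callable, Iterable, Optional


def style_attack(
    content_img: Any,
    style_imgs: Iterable[Any],
    true_text: str,
    style_transfer: Callable[[Any, Any, float], Any],
    recognize: Callable[[Any], str],
    strengths: Iterable[float] = (0.3, 0.5, 0.7),
) -> Optional[Any]:
    """Black-box, query-based search for a style-transferred adversarial example.

    The text content of `content_img` is never edited; only its style is
    changed, so a human can still read it while the recognizer is misled.
    """
    strengths = tuple(strengths)
    for style_img in style_imgs:
        for strength in strengths:
            # Restyle the text image; the characters themselves are untouched.
            candidate = style_transfer(content_img, style_img, strength)
            # Query the target model as a black box: only its output string is observed.
            if recognize(candidate) != true_text:
                return candidate  # prediction changed: adversarial example found
    return None  # no tested style/strength fooled the recognizer
```

In this sketch, the same loop covers the digital setting (feeding `candidate` directly to the model) and the physical setting (printing `candidate`, re-photographing it, and passing the photo to `recognize`).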

Original language: English
Article number: 012021
Journal: Journal of Physics: Conference Series
Volume: 1880
Issue number: 1
DOIs
Publication status: Published - 27 Apr 2021
Event: 5th International Conference on Machine Vision and Information Technology, CMVIT 2021 - Virtual, Online
Duration: 26 Feb 2021 → …
