TY - GEN
T1 - Generalized adversarial training in Riemannian space
AU - Zhang, Shufei
AU - Huang, Kaizhu
AU - Zhang, Rui
AU - Hussain, Amir
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/11
Y1 - 2019/11
N2 - Adversarial examples, augmented data points generated by imperceptible perturbations of input samples, have recently drawn much attention. Well-crafted adversarial examples can easily mislead even state-of-the-art deep neural network (DNN) models into making wrong predictions. To alleviate this problem, many studies have investigated how adversarial examples can be generated and/or effectively handled. All existing works tackle this problem in the Euclidean space. In this paper, we extend the learning of adversarial examples to the more general Riemannian space over DNNs. The proposed work is important in that (1) it is a generalized learning methodology, since the Riemannian space degrades to the Euclidean space in a special case; (2) it is the first work to tackle the adversarial example problem tractably from the perspective of Riemannian geometry; (3) geometrically, our method finds the steepest direction of the loss function by considering its second-order information. We also provide a theoretical study showing that our proposed method can truly find the descent direction for the loss function, with computational time comparable to that of traditional adversarial methods. Finally, the proposed framework demonstrates superior performance over traditional counterpart methods on benchmark data including MNIST, CIFAR-10, and SVHN.
AB - Adversarial examples, augmented data points generated by imperceptible perturbations of input samples, have recently drawn much attention. Well-crafted adversarial examples can easily mislead even state-of-the-art deep neural network (DNN) models into making wrong predictions. To alleviate this problem, many studies have investigated how adversarial examples can be generated and/or effectively handled. All existing works tackle this problem in the Euclidean space. In this paper, we extend the learning of adversarial examples to the more general Riemannian space over DNNs. The proposed work is important in that (1) it is a generalized learning methodology, since the Riemannian space degrades to the Euclidean space in a special case; (2) it is the first work to tackle the adversarial example problem tractably from the perspective of Riemannian geometry; (3) geometrically, our method finds the steepest direction of the loss function by considering its second-order information. We also provide a theoretical study showing that our proposed method can truly find the descent direction for the loss function, with computational time comparable to that of traditional adversarial methods. Finally, the proposed framework demonstrates superior performance over traditional counterpart methods on benchmark data including MNIST, CIFAR-10, and SVHN.
KW - Adversarial examples
KW - Adversarial training
KW - Deep neural network
KW - Regularization
KW - Riemannian manifold
UR - http://www.scopus.com/inward/record.url?scp=85078933614&partnerID=8YFLogxK
U2 - 10.1109/ICDM.2019.00093
DO - 10.1109/ICDM.2019.00093
M3 - Conference Proceeding
AN - SCOPUS:85078933614
T3 - Proceedings - IEEE International Conference on Data Mining, ICDM
SP - 826
EP - 835
BT - Proceedings - 19th IEEE International Conference on Data Mining, ICDM 2019
A2 - Wang, Jianyong
A2 - Shim, Kyuseok
A2 - Wu, Xindong
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 19th IEEE International Conference on Data Mining, ICDM 2019
Y2 - 8 November 2019 through 11 November 2019
ER -