TY - GEN
T1 - SimpleGAN
T2 - 19th IEEE International Conference on Data Mining Workshops, ICDMW 2019
AU - Zhang, Shufei
AU - Qian, Zhuang
AU - Huang, Kaizhu
AU - Zhang, Rui
AU - Hussain, Amir
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/11
Y1 - 2019/11
N2 - Generative Adversarial Networks (GANs) are powerful generative models, but they often suffer from unstable training and poor generation quality. Because the data and generation distributions are complex in high-dimensional space, it is difficult to measure the discrepancy between the two distributions, which is, however, vital for training successful GANs. Previous methods try to alleviate this problem by choosing suitable divergence metrics. In contrast, in this paper we propose a novel method, called SimpleGAN, that tackles the problem differently: transform the original complex distributions into simple ones in a low-dimensional space while preserving information, and then measure the discrepancy between the two simple distributions. This offers a new direction for addressing the stability of GANs. Specifically, starting from maximizing the mutual information between variables in the original high-dimensional space and the low-dimensional space, we eventually derive a much simplified objective, i.e., a lower bound on the mutual information. In experiments, we apply the proposed method to different baselines, i.e., the traditional GAN, WGAN-GP, and DCGAN, on the CIFAR-10 dataset. The proposed method achieves clear improvements over these baseline models.
AB - Generative Adversarial Networks (GANs) are powerful generative models, but they often suffer from unstable training and poor generation quality. Because the data and generation distributions are complex in high-dimensional space, it is difficult to measure the discrepancy between the two distributions, which is, however, vital for training successful GANs. Previous methods try to alleviate this problem by choosing suitable divergence metrics. In contrast, in this paper we propose a novel method, called SimpleGAN, that tackles the problem differently: transform the original complex distributions into simple ones in a low-dimensional space while preserving information, and then measure the discrepancy between the two simple distributions. This offers a new direction for addressing the stability of GANs. Specifically, starting from maximizing the mutual information between variables in the original high-dimensional space and the low-dimensional space, we eventually derive a much simplified objective, i.e., a lower bound on the mutual information. In experiments, we apply the proposed method to different baselines, i.e., the traditional GAN, WGAN-GP, and DCGAN, on the CIFAR-10 dataset. The proposed method achieves clear improvements over these baseline models.
KW - Adversarial training
KW - Deep learning
KW - Generative adversarial networks
KW - Information theory
KW - Variational inference
UR - http://www.scopus.com/inward/record.url?scp=85078770418&partnerID=8YFLogxK
U2 - 10.1109/ICDMW.2019.00132
DO - 10.1109/ICDMW.2019.00132
M3 - Conference Proceeding
AN - SCOPUS:85078770418
T3 - IEEE International Conference on Data Mining Workshops, ICDMW
SP - 905
EP - 910
BT - Proceedings - 19th IEEE International Conference on Data Mining Workshops, ICDMW 2019
A2 - Papapetrou, Panagiotis
A2 - Cheng, Xueqi
A2 - He, Qing
PB - IEEE Computer Society
Y2 - 8 November 2019 through 11 November 2019
ER -
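
Note. The abstract above sketches the core idea: project high-dimensional samples into a low-dimensional space, keep the projection informative by maximizing a lower bound on mutual information, and compare the resulting simple distributions. The following is a minimal illustrative sketch of one standard such bound, the Barber-Agakov variational lower bound I(X;Z) >= H(Z) + E[log q(z|x)]; the encoder architecture, the diagonal-Gaussian form of q, and all dimensions here are assumptions for demonstration, not the paper's actual derivation or code.

import math
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Deterministic map from high-dimensional x to a low-dimensional code z.
    def __init__(self, x_dim=3072, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim, 256), nn.ReLU(),
            nn.Linear(256, z_dim))

    def forward(self, x):
        return self.net(x)

class VariationalQ(nn.Module):
    # Diagonal-Gaussian q(z | x); its log-likelihood supplies the variational
    # term of the Barber-Agakov bound.
    def __init__(self, x_dim=3072, z_dim=16):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.log_var = nn.Linear(256, z_dim)

    def forward(self, x):
        h = self.backbone(x)
        return self.mu(h), self.log_var(h)

def mi_lower_bound(x, z, q):
    # I(X; Z) >= H(Z) + E[log q(z | x)].  H(Z) does not depend on q, so
    # maximizing the expected log-likelihood below tightens the bound.
    mu, log_var = q(x)
    log_q = -0.5 * ((z - mu) ** 2 / log_var.exp()
                    + log_var + math.log(2 * math.pi)).sum(dim=1)
    return log_q.mean()

# Usage: maximize the bound alongside the usual GAN losses.
enc, q = Encoder(), VariationalQ()
x = torch.randn(32, 3072)            # batch of flattened 32x32x3 images
z = enc(x)
loss = -mi_lower_bound(x, z, q)      # ascend the bound by descending -bound
loss.backward()

Once trained this way, the low-dimensional codes z retain information about x, so a divergence measured between the code distributions of real and generated samples stands in for the hard-to-estimate high-dimensional discrepancy, which is the direction the abstract describes.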