Mixing up real samples and adversarial samples for semi-supervised learning

Yun Ma, Xudong Mao, Yangbin Chen, Qing Li

Research output: Chapter in Book or Report/Conference proceeding › Conference Proceeding › peer-review

3 Citations (Scopus)

Abstract

Consistency regularization methods have shown great success in semi-supervised learning tasks. Most existing methods focus on either the local neighborhood or the in-between neighborhood of training samples to enforce the consistency constraint. In this paper, we propose a novel generalized framework called Adversarial Mixup (AdvMixup), which unifies the local and in-between neighborhood approaches by defining a virtual data distribution along the paths between training samples and adversarial samples. Experimental results on both synthetic data and benchmark datasets show that AdvMixup achieves better performance and robustness than state-of-the-art methods for semi-supervised learning.
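The core idea described in the abstract — defining virtual training points along the path between a real sample and its adversarial counterpart — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the logistic model, the FGSM-style perturbation, and names such as `fgsm_perturb` and `advmixup_batch` are ours for illustration.

```python
import numpy as np

# Toy sketch of the AdvMixup idea: mix each real sample with an
# adversarial version of itself, producing virtual points along the
# real-adversarial path on which a consistency loss could be applied.
# (Illustrative assumptions throughout; not the authors' code.)

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps=0.1):
    """One-step FGSM-style adversarial sample for a logistic model."""
    p = sigmoid(x @ w)
    grad_x = (p - y)[:, None] * w[None, :]   # d(logistic loss)/dx
    return x + eps * np.sign(grad_x)

def advmixup_batch(x, y, w, alpha=1.0, eps=0.1):
    """Interpolate real samples with their adversarial counterparts."""
    x_adv = fgsm_perturb(x, y, w, eps)
    lam = rng.beta(alpha, alpha, size=(x.shape[0], 1))  # mixing weights
    x_mix = lam * x + (1.0 - lam) * x_adv
    # A consistency-style target: the model's prediction on the clean
    # sample, toward which predictions along the path would be pulled.
    target = sigmoid(x @ w)
    return x_mix, target

x = rng.normal(size=(4, 3))
y = np.array([0.0, 1.0, 1.0, 0.0])
w = rng.normal(size=3)
x_mix, target = advmixup_batch(x, y, w)
print(x_mix.shape, target.shape)  # (4, 3) (4,)
```

Because the mixing weight is drawn from a Beta distribution, each virtual sample lies strictly between the real sample and its adversarial sample, which is what lets the framework interpolate between local-neighborhood and in-between-neighborhood consistency methods.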
Original language: English
Title of host publication: International Joint Conference on Neural Networks (IJCNN)
Pages: 1-8
DOI: 10.1109/IJCNN48605.2020.9207038
Publication status: Published - 2020
Externally published: Yes


Cite this

Ma, Y., Mao, X., Chen, Y., & Li, Q. (2020). Mixing up real samples and adversarial samples for semi-supervised learning. In International Joint Conference on Neural Networks (IJCNN) (pp. 1-8). https://doi.org/10.1109/IJCNN48605.2020.9207038