Virtual mixup training for unsupervised domain adaptation

Xudong Mao, Yun Ma, Zhenguo Yang, Yangbin Chen, Qing Li

Research output: Contribution to conference › Paper

Abstract

We study the problem of unsupervised domain adaptation, which aims to adapt models trained on a labeled source domain to a completely unlabeled target domain. Recently, the cluster assumption has been applied to unsupervised domain adaptation and achieved strong performance. One critical factor in successfully training under the cluster assumption is imposing the locally-Lipschitz constraint on the model. Existing methods impose the locally-Lipschitz constraint only around the training points, missing other areas such as the points in-between training data. In this paper, we address this issue by encouraging the model to behave linearly in-between training points. We propose a new regularization method called Virtual Mixup Training (VMT), which incorporates the locally-Lipschitz constraint into the areas in-between training data. Unlike the traditional mixup model, our method constructs the combination samples without using label information, allowing it to be applied to unsupervised domain adaptation. The proposed method is generic and can be combined with most existing models, such as the recent state-of-the-art model VADA. Extensive experiments demonstrate that VMT significantly improves the performance of VADA on six domain adaptation benchmark datasets. For the challenging task of adapting MNIST to SVHN, VMT improves the accuracy of VADA by over 30%. Code is available at https://github.com/xudonmao/VMT.
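
The core idea described above can be illustrated with a minimal PyTorch sketch: mix pairs of unlabeled inputs, use the model's own (detached) predictions as virtual labels in place of ground-truth labels, and penalize the divergence between the prediction on the mixed input and the mixed virtual label. The function name, the Beta parameter alpha, and the KL-based consistency loss are illustrative assumptions, not the authors' exact training code (see the repository above for that).

import torch
import torch.nn.functional as F

def vmt_loss(model, x, alpha=1.0):
    """Virtual mixup penalty on a batch of unlabeled inputs x (a sketch)."""
    # Virtual labels: the model's own predictions, treated as fixed targets.
    with torch.no_grad():
        y = F.softmax(model(x), dim=1)

    # Pair each sample with a randomly permuted partner and interpolate
    # both the inputs and the virtual labels with the same coefficient.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]

    # Encourage the model to behave linearly in-between training points:
    # its prediction on the mixed input should match the mixed virtual label.
    log_p = F.log_softmax(model(x_mix), dim=1)
    return F.kl_div(log_p, y_mix, reduction='batchmean')

In a combined setup such as VADA + VMT, this penalty would simply be added (with a weighting coefficient) to the existing source classification and adaptation losses for both source and target batches.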
Original language: English
Publication status: Published - 2019
Externally published: Yes
