Retargeted Multi-View Feature Learning with Separate and Shared Subspace Uncovering

Guo Sen Xie*, Xiao Bo Jin, Zheng Zhang, Zhonghua Liu, Xiaowei Xue, Jiexin Pu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

Abstract

Multi-view feature learning aims to improve the performance of learning tasks by fusing several kinds of features (views), such as heterogeneous and/or homogeneous features. Current leading multi-view feature learning approaches usually learn features in each view separately and do not uncover the information shared across views. In this paper, we propose a multi-view feature learning framework that simultaneously learns a separate subspace for each view and a shared subspace for all views: the separate subspace preserves the information particular to its view, while the shared subspace captures the feature correlations among the views. Both the particularity and the commonality are essential for classification. Furthermore, we relax the labels of the training samples within the concatenated subspaces, which yields a retargeted least squares regression (LSR) classifier. The transformation matrices tailored to each subspace within the corresponding view and the label-relaxed LSR classifier are jointly learned in a unified framework via an efficient alternating optimization scheme. Extensive experiments on four benchmark data sets demonstrate the superiority of the proposed method, which outperforms the compared counterpart methods.
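To make the overall scheme described in the abstract concrete, the following is a minimal NumPy sketch of the general idea, not the authors' algorithm: each view gets a view-specific projection and a projection into a common subspace, a ridge-style LSR classifier is fit on the concatenated codes, the regression targets are relaxed with a simplified margin-based retargeting rule (a stand-in for the retargeted LSR of the paper), and the steps alternate. All names, dimensions, step sizes, and regularizers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# ----- Toy multi-view data (all sizes here are illustrative) -----
n, n_classes = 60, 3
view_dims = [20, 30]                       # two synthetic views
labels = rng.integers(0, n_classes, n)
X = [rng.standard_normal((d, n)) for d in view_dims]
Y = np.eye(n_classes)[labels].T            # one-hot targets, shape (c, n)

k_sep, k_shr = 5, 5                        # separate / shared subspace sizes
lam, gamma = 1e-2, 1e-2                    # ridge-style regularizers

# Per-view projections: P_v -> separate subspace, Q_v -> shared subspace
P = [rng.standard_normal((d, k_sep)) * 0.1 for d in view_dims]
Q = [rng.standard_normal((d, k_shr)) * 0.1 for d in view_dims]

def embed(X, P, Q):
    """Concatenate per-view separate codes with the averaged shared code."""
    sep = [Pv.T @ Xv for Pv, Xv in zip(P, X)]                    # view-specific parts
    shr = np.mean([Qv.T @ Xv for Qv, Xv in zip(Q, X)], axis=0)   # shared part
    return np.vstack(sep + [shr])                                # (2*k_sep + k_shr, n)

def retarget(T, Y, margin=1.0):
    """Simplified label relaxation: push the true-class score at least
    `margin` above the runner-up (a proxy for retargeted LSR)."""
    T = T.copy()
    for i in range(T.shape[1]):
        c = int(np.argmax(Y[:, i]))
        runner_up = np.max(np.delete(T[:, i], c))
        T[c, i] = max(T[c, i], runner_up + margin)
    return T

T = Y.copy()
for it in range(20):                       # alternating optimization
    Z = embed(X, P, Q)
    # (1) LSR classifier on the concatenated subspace (ridge closed form)
    W = np.linalg.solve(Z @ Z.T + lam * np.eye(Z.shape[0]), Z @ T.T)
    # (2) retarget the regression targets given current predictions
    T = retarget(W.T @ Z, Y)
    # (3) gradient step on the projections to better fit the relaxed targets
    R = W.T @ Z - T                        # residual, shape (c, n)
    W_sep = np.split(W[:2 * k_sep], 2)     # rows of W acting on each separate code
    W_shr = W[2 * k_sep:]                  # rows of W acting on the shared code
    for v in range(len(X)):
        grad_P = X[v] @ R.T @ W_sep[v].T + gamma * P[v]
        grad_Q = X[v] @ R.T @ W_shr.T / len(X) + gamma * Q[v]
        P[v] -= 1e-3 * grad_P
        Q[v] -= 1e-3 * grad_Q

pred = np.argmax(W.T @ embed(X, P, Q), axis=0)
print("toy training accuracy:", np.mean(pred == labels))
```

The sketch only illustrates the alternation between the subspace projections, the relaxed targets, and the LSR classifier; the paper's actual objective, constraints, and closed-form updates differ.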

Original language: English
Article number: 8091111
Pages (from-to): 24895-24907
Number of pages: 13
Journal: IEEE Access
Volume: 5
DOIs
Publication status: Published - 31 Oct 2017
Externally published: Yes

Keywords

  • Feature learning
  • feature fusion
  • multi-view
  • subspace learning
