Learning latent features with infinite non-negative binary matrix tri-factorization

Xi Yang, Kaizhu Huang*, Rui Zhang, Amir Hussain

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference Proceeding › peer-review

3 Citations (Scopus)

Abstract

Non-negative Matrix Factorization (NMF) has been widely exploited to learn latent features from data. However, previous NMF models often assume a fixed number of features, say p features, where p is simply determined by experiment. Moreover, it is difficult to learn binary features, since binary matrices involve more challenging optimization problems. In this paper, we propose a new Bayesian model called the infinite non-negative binary matrix tri-factorization model (iNBMT), capable of automatically learning both the latent binary features and the number of features, based on the Indian Buffet Process (IBP). Moreover, iNBMT engages a tri-factorization process that decomposes a non-negative matrix into the product of three components: two binary matrices and a non-negative real matrix. Compared with traditional bi-factorization, tri-factorization can better reveal the latent structures among items (samples) and attributes (features). Specifically, we impose an IBP prior on the two infinite binary matrices, while a truncated Gaussian distribution is assumed on the weight matrix. To optimize the model, we develop an efficient modified maximization-expectation algorithm (ME algorithm), whose per-iteration complexity is one order lower than that of the recently proposed Maximization-Expectation IBP model [9]. We present the model definition, detail the optimization, and finally conduct a series of experiments. Experimental results demonstrate that our proposed iNBMT model significantly outperforms the compared algorithms on both synthetic and real data.
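
The following is a minimal sketch, not the authors' implementation, of the tri-factorization structure described in the abstract: a non-negative data matrix X is approximated by the product of two binary matrices and a non-negative real weight matrix. The symbol names Z, W, F and the fixed feature counts K, L are illustrative assumptions; in iNBMT the feature counts are inferred via IBP priors rather than fixed in advance.

    # Sketch of the tri-factorization X ≈ Z W F^T with binary Z, F and
    # non-negative real W. All names and sizes here are assumptions for
    # illustration only, not taken from the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    N, D = 100, 50   # items (samples) x attributes (features of the data)
    K, L = 5, 4      # latent feature counts (fixed here; iNBMT infers them)

    Z = rng.integers(0, 2, size=(N, K))   # binary item-to-latent-feature matrix
    F = rng.integers(0, 2, size=(D, L))   # binary attribute-to-latent-feature matrix
    W = np.abs(rng.normal(size=(K, L)))   # non-negative real weight matrix

    X = Z @ W @ F.T                       # non-negative reconstruction
    assert np.all(X >= 0)
    print(X.shape)                        # (100, 50)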

Original language: English
Title of host publication: Neural Information Processing - 23rd International Conference, ICONIP 2016, Proceedings
Editors: Kenji Doya, Kazushi Ikeda, Minho Lee, Akira Hirose, Seiichi Ozawa, Derong Liu
Publisher: Springer Verlag
Pages: 587-596
Number of pages: 10
ISBN (Print): 9783319466866
Publication status: Published - 2016
Event: 23rd International Conference on Neural Information Processing, ICONIP 2016 - Kyoto, Japan
Duration: 16 Oct 2016 - 21 Oct 2016

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9947 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 23rd International Conference on Neural Information Processing, ICONIP 2016
Country/Territory: Japan
City: Kyoto
Period: 16/10/16 - 21/10/16

Keywords

  • Indian buffet process prior
  • Infinite latent feature model
  • Infinite non-negative binary matrix tri-factorization