Beyond attributes: High-order attribute features for zero-shot learning

Xiao Bo Jin, Guo Sen Xie, Kaizhu Huang, Jianyu Miao, Qiufeng Wang

Research output: Chapter in Book or Report/Conference proceeding › Conference Proceeding › peer-review

2 Citations (Scopus)

Abstract

In this paper, SeeNet with high-order attribute features (SeeNet-HAF) is proposed to solve the challenging zero-shot learning (ZSL) task. The high-order attribute features aim to discover a more elaborate, discriminative high-order semantic vector for each class, distilling the correlations between class attributes into the model. SeeNet-HAF consists of two branches. The upper stream dynamically localizes discriminative object regions, and high-order attribute supervision is then incorporated to characterize the relationships between class attributes. Meanwhile, the bottom stream discovers complementary object regions by erasing the regions found by the upper stream from the feature maps. In addition, we propose a fast hyperparameter search strategy that balances the breadth and precision of the search. Experiments on four standard benchmark datasets demonstrate the superiority of the SeeNet-HAF framework.
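The paper's implementation is not reproduced here, but the following minimal PyTorch sketch illustrates the two-branch erase-and-discover idea described in the abstract, together with one plausible reading of "high-order attribute features" as pairwise attribute correlations. All module names, dimensions, the sigmoid localization maps, the erasing threshold, and the outer-product construction are illustrative assumptions, not the authors' architecture.

```python
# A minimal sketch of the two-branch erase-and-discover idea from the abstract.
# Module names, feature dimensions, the sigmoid localization maps, the erasing
# threshold, and the outer-product "high-order" construction are assumptions
# for illustration, not the authors' implementation.
import torch
import torch.nn as nn


def high_order_attributes(attrs: torch.Tensor) -> torch.Tensor:
    """One plausible high-order semantic vector: flatten the outer product
    a (x) a, which encodes pairwise attribute correlations.
    attrs: (B, A) -> returns (B, A * A)."""
    return torch.einsum("bi,bj->bij", attrs, attrs).flatten(1)


class TwoBranchAttributeNet(nn.Module):
    def __init__(self, in_channels=512, num_attributes=85, erase_thresh=0.6):
        super().__init__()
        # 1x1 convolutions produce a single-channel localization map per stream.
        self.loc_upper = nn.Conv2d(in_channels, 1, kernel_size=1)
        self.loc_lower = nn.Conv2d(in_channels, 1, kernel_size=1)
        # Linear heads map pooled features to per-image attribute predictions.
        self.attr_upper = nn.Linear(in_channels, num_attributes)
        self.attr_lower = nn.Linear(in_channels, num_attributes)
        self.erase_thresh = erase_thresh

    def forward(self, feats):
        # feats: (B, C, H, W) backbone feature maps.
        # Upper stream: attend to the most discriminative object region.
        attn = torch.sigmoid(self.loc_upper(feats))   # (B, 1, H, W)
        pooled_up = (feats * attn).mean(dim=(2, 3))   # attention-weighted pooling
        attrs_up = self.attr_upper(pooled_up)         # (B, A)

        # Bottom stream: erase the discovered region from the feature maps,
        # then search the remainder for complementary object regions.
        keep = (attn < self.erase_thresh).float()
        erased = feats * keep
        attn_low = torch.sigmoid(self.loc_lower(erased))
        pooled_low = (erased * attn_low).mean(dim=(2, 3))
        attrs_low = self.attr_lower(pooled_low)

        # High-order supervision would compare these predictions against the
        # class's high-order attribute vector (e.g., outer-product features).
        return attrs_up, attrs_low, high_order_attributes(attrs_up)
```

For instance, with ResNet-style 14x14 feature maps, `up, low, hi = TwoBranchAttributeNet()(torch.randn(2, 512, 14, 14))` yields the two streams' attribute predictions plus the hypothetical high-order vector.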

Original language: English
Title of host publication: Proceedings - 2019 International Conference on Computer Vision Workshop, ICCVW 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 2953-2962
Number of pages: 10
ISBN (Electronic): 9781728150239
DOIs
Publication status: Published - Oct 2019
Event: 17th IEEE/CVF International Conference on Computer Vision Workshop, ICCVW 2019 - Seoul, Korea, Republic of
Duration: 27 Oct 2019 – 28 Oct 2019

Publication series

Name: Proceedings - 2019 International Conference on Computer Vision Workshop, ICCVW 2019

Conference

Conference: 17th IEEE/CVF International Conference on Computer Vision Workshop, ICCVW 2019
Country/Territory: Korea, Republic of
City: Seoul
Period: 27/10/19 – 28/10/19

Keywords

  • Feature learning
  • High-order attribute
  • Zero-shot learning
