TY - GEN
T1 - Self-focus deep embedding model for coarse-grained zero-shot classification
AU - Yang, Guanyu
AU - Huang, Kaizhu
AU - Zhang, Rui
AU - Goulermas, John Y.
AU - Hussain, Amir
N1 - Publisher Copyright:
© Springer Nature Switzerland AG 2020.
PY - 2020
Y1 - 2020
N2 - Zero-shot learning (ZSL), i.e., classifying patterns for which labeled training data are lacking, is a challenging yet important research topic. One of the most common ideas for ZSL is to map the data (e.g., images) and the semantic attributes to the same embedding space. However, for coarse-grained classification tasks, the samples of each class tend to be unevenly distributed. This may cause the learned embedding function to map the attributes to an inappropriate location, hence limiting the classification performance. In this paper, we propose a novel regularized deep embedding model for ZSL in which a self-focus mechanism is constructed to constrain the learning of the embedding function. During training, the distances along different dimensions of the embedding space are focused conditionally on the class. Thereby, the locations of the prototypes mapped from the attributes can be adjusted according to the distribution of the samples of each class. Moreover, over-fitting of the embedding function to known classes is also mitigated. A series of experiments on four commonly used zero-shot databases shows that our proposed method attains significant improvement on coarse-grained data sets.
AB - Zero-shot learning (ZSL), i.e., classifying patterns for which labeled training data are lacking, is a challenging yet important research topic. One of the most common ideas for ZSL is to map the data (e.g., images) and the semantic attributes to the same embedding space. However, for coarse-grained classification tasks, the samples of each class tend to be unevenly distributed. This may cause the learned embedding function to map the attributes to an inappropriate location, hence limiting the classification performance. In this paper, we propose a novel regularized deep embedding model for ZSL in which a self-focus mechanism is constructed to constrain the learning of the embedding function. During training, the distances along different dimensions of the embedding space are focused conditionally on the class. Thereby, the locations of the prototypes mapped from the attributes can be adjusted according to the distribution of the samples of each class. Moreover, over-fitting of the embedding function to known classes is also mitigated. A series of experiments on four commonly used zero-shot databases shows that our proposed method attains significant improvement on coarse-grained data sets.
KW - Class-level over-fitting
KW - Coarse-grained
KW - Generalized Zero-Shot Learning
UR - http://www.scopus.com/inward/record.url?scp=85080948223&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-39431-8_2
DO - 10.1007/978-3-030-39431-8_2
M3 - Conference Proceeding
AN - SCOPUS:85080948223
SN - 9783030394301
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 12
EP - 22
BT - Advances in Brain Inspired Cognitive Systems - 10th International Conference, BICS 2019, Proceedings
A2 - Ren, Jinchang
A2 - Hussain, Amir
A2 - Zhao, Huimin
A2 - Cai, Jun
A2 - Chen, Rongjun
A2 - Xiao, Yinyin
A2 - Huang, Kaizhu
A2 - Zheng, Jiangbin
PB - Springer
T2 - 10th International Conference on Brain Inspired Cognitive Systems, BICS 2019
Y2 - 13 July 2019 through 14 July 2019
ER -