Abstract
Humans exhibit strong cognitive consistency under image transformations such as flipping and scaling. Motivated by this consistency in human visual perception, researchers have found that a convolutional neural network's discriminative ability can be further improved by forcing it to concentrate on the image regions that humans naturally attend to. The attention heatmap, a supplementary tool that reveals the regions a network chooses to focus on, has been developed and widely adopted in CNNs. Building on this notion of visual consistency, we propose a novel end-to-end trainable CNN architecture with multi-scale attention consistency. Specifically, our model takes an original image and its flipped counterpart as inputs and feeds them into a single standard ResNet augmented with attention-enhancement modules to generate semantically strong attention heatmaps. We also compute the distance between the multi-scale attention heatmaps of the two images and use it as an additional loss to help the network achieve better performance. Our network performs strongly on the multi-label classification task and attains compelling results on the WIDER Attribute Dataset.
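The core idea described above, penalising the distance between attention heatmaps of an image and its horizontally flipped copy, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name `attention_consistency_loss`, the list-of-heatmaps interface, and the choice of an L2 distance are assumptions made for clarity.

```python
import torch
import torch.nn.functional as F

def attention_consistency_loss(heatmaps_orig, heatmaps_flip):
    """Consistency loss between multi-scale attention heatmaps of an image
    and its horizontally flipped copy.

    Both arguments are lists of (B, 1, H, W) tensors produced at different
    network stages (hypothetical interface; the paper's exact attention
    modules and distance measure may differ).
    """
    loss = 0.0
    for h_orig, h_flip in zip(heatmaps_orig, heatmaps_flip):
        # Flip the heatmap of the flipped input back so the two maps are
        # spatially aligned, then penalise their L2 distance.
        h_flip_aligned = torch.flip(h_flip, dims=[-1])
        loss = loss + F.mse_loss(h_orig, h_flip_aligned)
    return loss / len(heatmaps_orig)

# Hypothetical usage inside a training step (model, bce_loss and lambda_att
# are placeholders, not names from the paper):
# logits_o, maps_o = model(images)                          # original batch
# logits_f, maps_f = model(torch.flip(images, dims=[-1]))   # flipped batch
# total_loss = bce_loss(logits_o, labels) \
#              + lambda_att * attention_consistency_loss(maps_o, maps_f)
```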
| Original language | English |
| --- | --- |
| Title of host publication | International Conference on Neural Information Processing (ICONIP), 2020 |
| Editors | Haiqin Yang, Kitsuchart Pasupa, Andrew Chi-Sing Leung, James T. Kwok, Jonathan H. Chan, Irwin King |
| Pages | 815-823 |
| Number of pages | 9 |
| DOIs | |
| Publication status | Published - 2020 |
| Event | 27th International Conference on Neural Information Processing, ICONIP 2020 - Bangkok, Thailand. Duration: 18 Nov 2020 → 22 Nov 2020 |
Conference
| Conference | 27th International Conference on Neural Information Processing, ICONIP 2020 |
| --- | --- |
| Country/Territory | Thailand |
| City | Bangkok |
| Period | 18/11/20 → 22/11/20 |
Keywords
- Attention
- Consistency
- Image classification
- Multi-label learning