Dense Attention: A Densely Connected Attention Mechanism for Vision Transformer

Nannan Li, Yaran Chen*, Dongbin Zhao

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference Proceeding › peer-review

3 Citations (Scopus)

Abstract

Recently, the Vision Transformer has demonstrated impressive capability in image understanding. The multi-head self-attention mechanism is fundamental to its formidable performance. However, self-attention is computationally expensive, so training the model demands powerful computational resources or long training times. This paper designs a novel and efficient attention mechanism, Dense Attention, to overcome this problem. Dense Attention attends to features from multiple views through a dense connection paradigm. Benefiting from this attention over comprehensive features, Dense Attention can i) remarkably strengthen the image representations of the model, and ii) partially replace the multi-head self-attention mechanism to enable model slimming. To verify its effectiveness, we implement Dense Attention in prevalent Vision Transformer models, including the non-pyramid architecture DeiT and the pyramid architecture Swin Transformer. Experimental results on ImageNet classification show that Dense Attention indeed improves performance: +1.8%/+1.3% for DeiT-T/S and +0.7%/+1.2% for Swin-T/S, respectively. Dense Attention also demonstrates its transferability on the CIFAR-10 and CIFAR-100 recognition benchmarks, with classification accuracies of 98.9% and 89.6%, respectively. Furthermore, Dense Attention can mitigate the performance loss caused by pruning the number of attention heads. Code and pre-trained models will be available at https://github.com/koala719/Dense-ViT.
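
The abstract describes the mechanism only at a high level, so the following is a minimal, hypothetical sketch of what a densely connected attention block could look like, assuming DenseNet-style reuse of the features produced by all preceding blocks. It is not the authors' implementation (see the linked repository for that); the name `DenseAttentionBlock`, the fusing projection, and the residual layout are illustrative assumptions only.

```python
# Hypothetical sketch of a densely connected attention block (PyTorch).
# NOT the paper's implementation; layout and names are assumptions.
import torch
import torch.nn as nn


class DenseAttentionBlock(nn.Module):
    """One transformer block that attends over the concatenation of the
    features produced by all preceding blocks (a dense connection)."""

    def __init__(self, dim: int, num_prev: int, num_heads: int = 4):
        super().__init__()
        # Project the concatenated multi-view features back to `dim`
        # so the block width stays constant as depth grows.
        self.fuse = nn.Linear(dim * (num_prev + 1), dim)
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, features: list[torch.Tensor]) -> torch.Tensor:
        # `features` holds the block input plus every earlier block's output.
        x = self.fuse(torch.cat(features, dim=-1))
        y = self.norm(x)
        out, _ = self.attn(y, y, y)
        return x + out  # residual connection


# Usage: each block consumes the running list of feature maps and
# appends its own output, mirroring a dense connectivity pattern.
if __name__ == "__main__":
    dim, depth = 64, 3
    tokens = torch.randn(2, 16, dim)  # (batch, tokens, dim)
    feats = [tokens]
    for i in range(depth):
        block = DenseAttentionBlock(dim, num_prev=i)
        feats.append(block(feats))
```

Under this reading, the dense connections give each block access to feature "views" from every earlier depth, which is one plausible way the mechanism could compensate for a reduced head count.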

Original language: English
Title of host publication: IJCNN 2023 - International Joint Conference on Neural Networks, Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781665488679
Publication status: Published - 2023
Event: 2023 International Joint Conference on Neural Networks, IJCNN 2023 - Gold Coast, Australia
Duration: 18 Jun 2023 – 23 Jun 2023

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks
Volume: 2023-June

Conference

Conference: 2023 International Joint Conference on Neural Networks, IJCNN 2023
Country/Territory: Australia
City: Gold Coast
Period: 18/06/23 – 23/06/23

Keywords

  • Dense connection
  • image classification
  • model slimming
  • pyramid
  • vision transformer
