TY - JOUR
T1 - BViT: broad attention-based vision transformer
AU - Li, Nannan
AU - Chen, Yaran
AU - Li, Weifan
AU - Ding, Zixiang
AU - Zhao, Dongbin
AU - Nie, Shuai
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2024
Y1 - 2024
AB - Recent works have demonstrated that transformers can achieve promising performance in computer vision by exploiting the relationships among image patches with self-attention. However, they only consider the attention in a single feature layer and ignore the complementarity of attention across different layers. In this article, we propose broad attention, which improves performance by incorporating the attention relationships of different layers in the vision transformer (ViT); the resulting model is called BViT. Broad attention is implemented by broad connection and parameter-free attention. The broad connection of each transformer layer promotes the transmission and integration of information in BViT. Without introducing additional trainable parameters, parameter-free attention jointly attends to the attention information already available in different layers to extract useful information and build the relationships among them. Experiments on image classification tasks demonstrate that BViT delivers superior top-1 accuracy of 75.0%/81.6% on ImageNet with 5M/22M parameters. Moreover, we transfer BViT to downstream object recognition benchmarks, achieving 98.9% and 89.9% accuracy on CIFAR10 and CIFAR100, respectively, exceeding ViT with fewer parameters. In generalization tests, broad attention also brings an improvement of more than 1% to Swin Transformer, T2T-ViT, and LVT. In summary, broad attention is promising for improving the performance of attention-based models. Code and pretrained models are available at https://github.com/DRL/BViT.
KW - Broad attention
KW - broad connection
KW - image classification
KW - parameter-free attention
KW - vision transformer (ViT)
UR - http://www.scopus.com/inward/record.url?scp=85159793175&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2023.3264730
DO - 10.1109/TNNLS.2023.3264730
M3 - Article
C2 - 37126636
AN - SCOPUS:85159793175
SN - 2162-237X
VL - 35
SP - 12772
EP - 12783
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 9
ER -