DSGA-Net: Deeply separable gated transformer and attention strategy for medical image segmentation network

Junding Sun, Jiuqiang Zhao, Xiaosheng Wu, Chaosheng Tang, Shuihua Wang, Yudong Zhang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

To address the problems of under-segmentation and over-segmentation of small organs in medical image segmentation, we present a novel medical image segmentation network with a Depth Separable Gated Transformer and a Three-branch Attention module (DSGA-Net). First, the model adds a Depth Separable Gated Visual Transformer (DSG-ViT) module to its encoder to enhance (i) the contextual links among global, local, and channel features and (ii) the sensitivity to location information. Second, a Mixed Three-branch Attention (MTA) module is proposed to increase the number of features in the up-sampling process while reducing the loss of feature information when restoring the feature map to the original image size. On the Synapse, BraTS2020, and ACDC public datasets, DSGA-Net achieves Dice Similarity Coefficients (DSC) of 81.24%, 85.82%, and 91.34%, respectively. Moreover, the Hausdorff Distance (HD) decreases to 20.91 and 5.27 on Synapse and BraTS2020, reductions of 10.78 and 0.69 compared to the TransUNet baseline. The experimental results indicate that DSGA-Net achieves better segmentation than most state-of-the-art methods.
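The paper does not publish its implementation here, but the operation the DSG-ViT module is named after, depthwise separable convolution, factors a standard convolution into a per-channel spatial filter followed by a 1×1 pointwise channel mixer, cutting the parameter count substantially. A minimal NumPy sketch of this generic operation (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def depthwise_separable_conv(x, depthwise_k, pointwise_w):
    """Depthwise separable convolution (same-padding, stride 1).
    x: input feature map, shape (C, H, W)
    depthwise_k: one k x k filter per input channel, shape (C, k, k)
    pointwise_w: 1x1 channel-mixing weights, shape (C_out, C)
    Returns: output feature map, shape (C_out, H, W).
    """
    C, H, W = x.shape
    k = depthwise_k.shape[1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    # Depthwise step: each channel is filtered independently.
    dw = np.empty_like(x, dtype=float)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                dw[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * depthwise_k[c])
    # Pointwise step: 1x1 convolution mixes information across channels.
    return np.einsum('oc,chw->ohw', pointwise_w, dw)

def param_counts(c_in, c_out, k):
    """Weight counts for a standard conv vs. its depthwise separable factorization."""
    standard = c_in * c_out * k * k
    separable = c_in * k * k + c_in * c_out
    return standard, separable
```

For a 3×3 convolution from 64 to 128 channels, `param_counts(64, 128, 3)` gives 73,728 weights for the standard form versus 8,768 for the separable form, which is why the factorization is attractive inside transformer blocks that must stay lightweight.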

Original language: English
Article number: 101553
Journal: Journal of King Saud University - Computer and Information Sciences
Volume: 35
Issue number: 5
DOIs
Publication status: Published - May 2023
Externally published: Yes

Keywords

  • Depth separable
  • Gated attention mechanism
  • Medical image segmentation
  • Transformer

