Segmentation of retinal vessels in fundus images based on U-Net with self-calibrated convolutions and spatial attention modules

Yi Biao Rong, Yu Xiong, Chong Li, Ying Chen, Peiwei Wei, Chuliang Wei, Zhun Fan*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Automated and accurate segmentation of retinal vessels in fundus images is an important step in screening for and diagnosing various ophthalmologic diseases. However, many factors, including variations in vessel color, shape and size, make this task an intricate challenge. U-Net based methods are among the most popular approaches to vessel segmentation. In these methods, however, the size of the convolution kernels is generally fixed, so each individual convolution operation has a single receptive field, which is not conducive to segmenting retinal vessels of various thicknesses. To overcome this problem, in this paper we replace the traditional convolutions in the U-Net with self-calibrated convolutions, which enable the U-Net to learn discriminative representations from different receptive fields. In addition, we propose an improved spatial attention module, instead of traditional convolutions, to connect the encoding and decoding parts of the U-Net, which improves the ability of the U-Net to detect thin vessels. The proposed method has been tested on the Digital Retinal Images for Vessel Extraction (DRIVE) database and the Child Heart and Health Study in England database (CHASE DB1). The metrics used to evaluate the performance of the proposed method are accuracy (ACC), sensitivity (SE), specificity (SP), F1-score (F1) and the area under the receiver operating characteristic curve (AUC). The proposed method achieves ACC, SE, SP, F1 and AUC of 0.9680, 0.8036, 0.9840, 0.8138 and 0.9840 respectively on the DRIVE database, and 0.9756, 0.8118, 0.9867, 0.8068 and 0.9888 respectively on CHASE DB1, outperforming the traditional U-Net (0.9646, 0.7895, 0.9814, 0.7963 and 0.9791 on DRIVE; 0.9733, 0.7817, 0.9862, 0.7870 and 0.9810 on CHASE DB1). The experimental results indicate that the proposed modifications to the U-Net are effective for vessel segmentation.
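
To illustrate the two building blocks the abstract describes, the following is a minimal PyTorch-style sketch, not the authors' released code: a self-calibrated convolution in the spirit of SCNet (Liu et al., CVPR 2020) and a CBAM-style spatial attention gate. Module names, channel counts, and the pooling rate r are illustrative assumptions; the paper's exact layer configuration may differ.

```python
# Hedged sketch only: an assumed implementation of the two modules named in
# the abstract, not the authors' actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfCalibratedConv(nn.Module):
    """Splits the channels in two; one half is modulated by a gate computed
    from a downsampled view of the input, giving a larger receptive field."""

    def __init__(self, channels, r=4):
        super().__init__()
        half = channels // 2
        self.conv_plain = nn.Conv2d(half, half, 3, padding=1)  # untouched branch
        self.conv_k2 = nn.Conv2d(half, half, 3, padding=1)     # on pooled input
        self.conv_k3 = nn.Conv2d(half, half, 3, padding=1)     # feature transform
        self.conv_k4 = nn.Conv2d(half, half, 3, padding=1)     # output transform
        self.pool = nn.AvgPool2d(kernel_size=r, stride=r)

    def forward(self, x):
        x1, x2 = torch.chunk(x, 2, dim=1)
        # Calibration gate built from a coarser (pooled) view of x1.
        coarse = F.interpolate(self.conv_k2(self.pool(x1)),
                               size=x1.shape[2:], mode="bilinear",
                               align_corners=False)
        gate = torch.sigmoid(x1 + coarse)
        y1 = self.conv_k4(self.conv_k3(x1) * gate)
        y2 = self.conv_plain(x2)
        return torch.cat([y1, y2], dim=1)


class SpatialAttention(nn.Module):
    """Gates a feature map with per-pixel weights derived from its
    channel-wise average and maximum responses."""

    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map, _ = torch.max(x, dim=1, keepdim=True)
        weights = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * weights


if __name__ == "__main__":
    feats = torch.randn(1, 64, 64, 64)     # e.g. one encoder feature map
    feats = SelfCalibratedConv(64)(feats)  # multi-receptive-field convolution
    feats = SpatialAttention()(feats)      # emphasize thin-vessel pixels
    print(feats.shape)                     # torch.Size([1, 64, 64, 64])
```

In a U-Net of the kind described, blocks like SelfCalibratedConv would replace the paired 3x3 convolutions in the encoder/decoder stages, and a spatial attention gate of this form would sit on the skip connections between the encoding and decoding parts.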

Original language: English
Pages (from-to): 1745-1755
Number of pages: 11
Journal: Medical and Biological Engineering and Computing
Volume: 61
Issue number: 7
DOIs
Publication status: Published - Jul 2023
Externally published: Yes

Keywords

  • Retinal vessel segmentation
  • Self-calibrated convolutions
  • Spatial attention modules
