TY - GEN
T1 - Sparse-View CT Reconstruction Based on Dual-Domain Deep Learning
AU - Li, Lin
AU - Xiang, Yang
AU - Jiang, Chunyu
AU - Hu, Peiyu
AU - Xie, Yejuan
AU - Ji, Chengtao
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Computed tomography (CT) is an important medical imaging technique widely used in clinical diagnosis. Sparse-view CT is an effective technique for significantly reducing radiation doses in CT imaging, but it often results in severe artifacts when using traditional reconstruction algorithms such as filtered backprojection (FBP). To address this, we propose a dual-domain deep learning architecture for sparse-view CT reconstruction that reduces radiation dose while enhancing image quality. Based on the U-Net model, this method integrates image domain and sinogram domain information through an improved Domain Fusion Module (DFM), which allows early fusion of these features to tackle blurring and artifacts caused by sparse views. Unlike existing methods, we only extract a portion of features for dual-domain fusion while retaining some original features, balancing fusion and information retention. We also employ a Convolutional Block Attention Module (CBAM) in the DFM to prioritize relevant features and improve reconstruction quality. Experiments conducted on the publicly available Mayo2016 dataset demonstrate that our proposed model achieves superior reconstruction quality compared to other state-of-the-art approaches.
AB - Computed tomography (CT) is an important medical imaging technique widely used in clinical diagnosis. Sparse-view CT is an effective technique for significantly reducing radiation doses in CT imaging, but it often results in severe artifacts when using traditional reconstruction algorithms such as filtered backprojection (FBP). To address this, we propose a dual-domain deep learning architecture for sparse-view CT reconstruction that reduces radiation dose while enhancing image quality. Based on the U-Net model, this method integrates image domain and sinogram domain information through an improved Domain Fusion Module (DFM), which allows early fusion of these features to tackle blurring and artifacts caused by sparse views. Unlike existing methods, we only extract a portion of features for dual-domain fusion while retaining some original features, balancing fusion and information retention. We also employ a Convolutional Block Attention Module (CBAM) in the DFM to prioritize relevant features and improve reconstruction quality. Experiments conducted on the publicly available Mayo2016 dataset demonstrate that our proposed model achieves superior reconstruction quality compared to other state-of-the-art approaches.
KW - CT reconstruction
KW - deep learning
KW - dual-domain
KW - sparse-view
UR - https://www.scopus.com/pages/publications/105007756533
U2 - 10.1109/CSECS64665.2025.11009552
DO - 10.1109/CSECS64665.2025.11009552
M3 - Conference Proceeding
AN - SCOPUS:105007756533
T3 - CSECS 2025 - Proceedings of 2025 7th International Conference on Software Engineering and Computer Science
BT - CSECS 2025 - Proceedings of 2025 7th International Conference on Software Engineering and Computer Science
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 7th International Conference on Software Engineering and Computer Science, CSECS 2025
Y2 - 21 March 2025 through 23 March 2025
ER -