TY - JOUR
T1 - Win-Diff: classifier-guided diffusion model for CT image windowing
T2 - Engineering Research Express
AU - Liew, Yee Zhing
AU - PP Abdul Majeed, Anwar
AU - Tan, Andrew Huey Ping
AU - Lim, Chee Shen
AU - Nguyen, Anh
AU - Paoletti, Paolo
AU - Chen, Wei
N1 - Publisher Copyright:
© 2025 IOP Publishing Ltd. All rights, including for text and data mining, AI training, and similar technologies, are reserved.
PY - 2025/06/30
Y1 - 2025/06/30
AB - Windowing is a critical preprocessing step for CT imaging, in which high-bit-depth CT images are mapped to lower-bit-depth formats to enhance the visualisation of anatomical structures for radiologists and for deep-learning diagnostic models. However, owing to variations in CT scanner settings and patient-specific imaging requirements, windowing parameters typically require manual adjustment. This paper introduces Win-Diff, a novel classifier-guided diffusion model for CT image windowing that reduces manual effort and improves diagnostic accuracy, particularly for nodule detection. Unlike traditional approaches that predict windowing parameters such as window width (WW) and window level (WL) with convolutional neural networks (CNNs), Win-Diff directly generates task-optimised windowed images through a diffusion U-Net architecture. A classifier head is integrated into the diffusion process to guide image generation towards both visual clarity and downstream diagnostic performance. To balance accurate reconstruction of windowed images with task-specific optimisation, we design a combined loss function incorporating reconstruction fidelity and classification performance. We evaluate Win-Diff on the nodule classification task using the Medical Segmentation Decathlon (MSD) lung dataset. Our results show that Win-Diff outperforms baseline methods in classification accuracy, and that the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) of Win-Diff-generated images exceed those of baseline methods, while its loss converges significantly faster.
KW - CT images
KW - diffusion model
KW - windowing
UR - http://www.scopus.com/inward/record.url?scp=105006702574&partnerID=8YFLogxK
DO - 10.1088/2631-8695/add98a
M3 - Article
AN - SCOPUS:105006702574
SN - 2631-8695
VL - 7
JO - Engineering Research Express
JF - Engineering Research Express
IS - 2
M1 - 025267
ER -