TY - JOUR
T1 - Deep learning-based detection of depression by fusing auditory, visual and textual clues
AU - Xu, Chenyang
AU - Chen, Yangbin
AU - Tao, Yanbao
AU - Xie, Wanqing
AU - Liu, Xiaofeng
AU - Lin, Yunhan
AU - Liang, Chunfeng
AU - Du, Fan
AU - Zhi, Zhixiong
AU - Shi, Chuan
N1 - Publisher Copyright:
© 2025 The Authors
PY - 2025/12/15
Y1 - 2025/12/15
N2 - Background: Early detection of depression is crucial for implementing interventions. Deep learning-based computer vision (CV), semantic, and acoustic analyses have enabled the automated analysis of visual and auditory signals. Objective: We proposed an automated depression detection model based on artificial intelligence (AI) that integrated visual, auditory, and textual clues. Moreover, we validated the model's performance in multiple scenarios, including interviews with a chatbot. Methods: A chatbot for depressive symptom inquiry, powered by GPT-2.0, was developed. The brief affective interview task was designed as a supplement. Audio-video and textual clues were captured during the interview, and features from the different modalities were fused using a multi-head cross-attention network. To validate the model's generalizability, we performed external validation with an independent dataset. Results: (1) In the internal validation set (152 depression patients and 118 healthy controls), the multimodal model demonstrated strong predictive power for depression in all scenarios, with an area under the curve (AUC) exceeding 0.950 and an accuracy over 0.930. Under the symptomatic interview by chatbot scenario, the model showed exceptional performance, achieving an AUC of 0.999. Specificity decreased slightly (0.883) in the brief affective interview task. The multimodal model outperformed its unimodal and bimodal counterparts. (2) For external validation under the symptomatic interview by chatbot scenario, a geographically distinct dataset (55 depression patients and 45 healthy controls) was employed. The multimodal fusion model achieved an AUC of 0.978, though all modality combinations exhibited reduced performance compared with internal validation. Limitations: Longitudinal follow-up was not conducted in this study, and applicability to severe depression requires further study.
AB - Background: Early detection of depression is crucial for implementing interventions. Deep learning-based computer vision (CV), semantic, and acoustic analyses have enabled the automated analysis of visual and auditory signals. Objective: We proposed an automated depression detection model based on artificial intelligence (AI) that integrated visual, auditory, and textual clues. Moreover, we validated the model's performance in multiple scenarios, including interviews with a chatbot. Methods: A chatbot for depressive symptom inquiry, powered by GPT-2.0, was developed. The brief affective interview task was designed as a supplement. Audio-video and textual clues were captured during the interview, and features from the different modalities were fused using a multi-head cross-attention network. To validate the model's generalizability, we performed external validation with an independent dataset. Results: (1) In the internal validation set (152 depression patients and 118 healthy controls), the multimodal model demonstrated strong predictive power for depression in all scenarios, with an area under the curve (AUC) exceeding 0.950 and an accuracy over 0.930. Under the symptomatic interview by chatbot scenario, the model showed exceptional performance, achieving an AUC of 0.999. Specificity decreased slightly (0.883) in the brief affective interview task. The multimodal model outperformed its unimodal and bimodal counterparts. (2) For external validation under the symptomatic interview by chatbot scenario, a geographically distinct dataset (55 depression patients and 45 healthy controls) was employed. The multimodal fusion model achieved an AUC of 0.978, though all modality combinations exhibited reduced performance compared with internal validation. Limitations: Longitudinal follow-up was not conducted in this study, and applicability to severe depression requires further study.
KW - Artificial intelligence (AI)
KW - Computer vision (CV)
KW - Deep learning
KW - Depression
KW - Multi-modality
UR - https://www.scopus.com/pages/publications/105011644596
U2 - 10.1016/j.jad.2025.119860
DO - 10.1016/j.jad.2025.119860
M3 - Article
SN - 0165-0327
VL - 391
JO - Journal of Affective Disorders
JF - Journal of Affective Disorders
M1 - 119860
ER -