TY - JOUR
T1 - Diagnosis of COVID-19 via acoustic analysis and artificial intelligence by monitoring breath sounds on smartphones
AU - Chen, Zhiang
AU - Li, Muyun
AU - Wang, Ruoyu
AU - Sun, Wenzhuo
AU - Liu, Jiayi
AU - Li, Haiyang
AU - Wang, Tianxin
AU - Lian, Yuan
AU - Zhang, Jiaqian
AU - Wang, Xinheng
N1 - Publisher Copyright:
© 2022 Elsevier Inc.
PY - 2022/6
Y1 - 2022/6
N2 - Scientific evidence shows that acoustic analysis could be an indicator for diagnosing COVID-19. Analysis of breath sounds recorded on smartphones reveals that patients with COVID-19 exhibit distinct patterns in both the time domain and the frequency domain. These patterns are used in this paper to diagnose COVID-19 infection. Statistics of the sound signals, frequency-domain analysis, and Mel-Frequency Cepstral Coefficients (MFCCs) are calculated and applied in two classifiers, k-Nearest Neighbors (kNN) and a Convolutional Neural Network (CNN), to determine whether a user is infected with COVID-19. Test results show that an accuracy of over 97% can be achieved with the CNN classifier and more than 85% with kNN using optimized features. Optimization methods for selecting the best features, and various metrics for evaluating performance, are also demonstrated in this paper. Owing to its high accuracy, the CNN model was implemented in an Android app that diagnoses COVID-19 and reports a probability indicating the confidence level. An initial medical test shows results similar to those of the lateral flow method, indicating that the proposed method is feasible and effective. Because it relies only on breath sounds recorded on a smartphone, the method can be used by anyone regardless of the availability of other medical resources, making it a powerful tool for society to diagnose COVID-19.
AB - Scientific evidence shows that acoustic analysis could be an indicator for diagnosing COVID-19. Analysis of breath sounds recorded on smartphones reveals that patients with COVID-19 exhibit distinct patterns in both the time domain and the frequency domain. These patterns are used in this paper to diagnose COVID-19 infection. Statistics of the sound signals, frequency-domain analysis, and Mel-Frequency Cepstral Coefficients (MFCCs) are calculated and applied in two classifiers, k-Nearest Neighbors (kNN) and a Convolutional Neural Network (CNN), to determine whether a user is infected with COVID-19. Test results show that an accuracy of over 97% can be achieved with the CNN classifier and more than 85% with kNN using optimized features. Optimization methods for selecting the best features, and various metrics for evaluating performance, are also demonstrated in this paper. Owing to its high accuracy, the CNN model was implemented in an Android app that diagnoses COVID-19 and reports a probability indicating the confidence level. An initial medical test shows results similar to those of the lateral flow method, indicating that the proposed method is feasible and effective. Because it relies only on breath sounds recorded on a smartphone, the method can be used by anyone regardless of the availability of other medical resources, making it a powerful tool for society to diagnose COVID-19.
KW - Acoustic analysis
KW - Breath sound
KW - COVID-19
KW - Convolutional Neural Network (CNN)
KW - k-Nearest Neighbors (kNN)
UR - http://www.scopus.com/inward/record.url?scp=85129479325&partnerID=8YFLogxK
U2 - 10.1016/j.jbi.2022.104078
DO - 10.1016/j.jbi.2022.104078
M3 - Article
C2 - 35489595
AN - SCOPUS:85129479325
SN - 1532-0464
VL - 130
JO - Journal of Biomedical Informatics
JF - Journal of Biomedical Informatics
M1 - 104078
ER -