TY - GEN
T1 - Predicting and Visualizing Covid-19 Identification by a Hybrid Machine Learning and Pre-trained Model
AU - Cai, Ruilin
AU - Li, Weimei
AU - Lu, Han
AU - Chen, Jiangang
AU - Zou, Dongdong
AU - Chen, Xin
AU - Pan, Yongchao
AU - Feng, Liang
AU - Qi, Jun
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - The emergence of the Omicron variant in Shanghai in 2022 has highlighted the need for effective diagnostic tools for COVID-19. Recent studies have indicated that cough sounds carry distinctive features capable of distinguishing infected individuals from healthy ones. This study proposes a hybrid approach that combines machine learning and pre-trained models, using audio features such as Mel-frequency cepstral coefficients and waveforms as inputs to train both types of models for accurate identification of COVID-19 patients and healthy individuals. The study employs a dataset collected from hospitals in Shanghai, China, comprising 78 participants, including both COVID-19 positive and negative individuals. The proposed method demonstrates superior performance in diagnosing COVID-19 compared with existing mainstream machine learning algorithms. Furthermore, the audio features most decisive for the COVID-19 positive classifier are identified via SHAP feature-importance values. Overall, the proposed approach achieves excellent diagnostic accuracy for COVID-19, outperforming current mainstream machine learning methods. With its strengths in performance, speed, and usability, this algorithm shows great promise for enabling large-scale screening and helping contain future widespread infections.
AB - The emergence of the Omicron variant in Shanghai in 2022 has highlighted the need for effective diagnostic tools for COVID-19. Recent studies have indicated that cough sounds carry distinctive features capable of distinguishing infected individuals from healthy ones. This study proposes a hybrid approach that combines machine learning and pre-trained models, using audio features such as Mel-frequency cepstral coefficients and waveforms as inputs to train both types of models for accurate identification of COVID-19 patients and healthy individuals. The study employs a dataset collected from hospitals in Shanghai, China, comprising 78 participants, including both COVID-19 positive and negative individuals. The proposed method demonstrates superior performance in diagnosing COVID-19 compared with existing mainstream machine learning algorithms. Furthermore, the audio features most decisive for the COVID-19 positive classifier are identified via SHAP feature-importance values. Overall, the proposed approach achieves excellent diagnostic accuracy for COVID-19, outperforming current mainstream machine learning methods. With its strengths in performance, speed, and usability, this algorithm shows great promise for enabling large-scale screening and helping contain future widespread infections.
KW - Audio classification
KW - COVID-19 diagnosis
KW - Machine learning
KW - Pre-trained model
UR - http://www.scopus.com/inward/record.url?scp=85217283375&partnerID=8YFLogxK
U2 - 10.1109/BIBM62325.2024.10821829
DO - 10.1109/BIBM62325.2024.10821829
M3 - Conference Proceeding
AN - SCOPUS:85217283375
T3 - Proceedings - 2024 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2024
SP - 5936
EP - 5943
BT - Proceedings - 2024 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2024
A2 - Cannataro, Mario
A2 - Zheng, Huiru
A2 - Gao, Lin
A2 - Cheng, Jianlin
A2 - de Miranda, Joao Luis
A2 - Zumpano, Ester
A2 - Hu, Xiaohua
A2 - Cho, Young-Rae
A2 - Park, Taesung
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2024
Y2 - 3 December 2024 through 6 December 2024
ER -