TY - GEN
T1 - TransVAT
T2 - 2023 CAA Symposium on Fault Detection, Supervision and Safety for Technical Processes, SAFEPROCESS 2023
AU - Zhan, Yifan
AU - Yang, Rui
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Fault diagnosis plays a critical role in ensuring safety and minimizing downtime across various industries. However, because fault signals are difficult to acquire in practical engineering systems, labeled samples are often scarce. To address this issue, few-shot learning has emerged in recent years as a promising approach for bearing fault diagnosis. Recent studies have demonstrated the effectiveness of the Transformer and variational attention in this field. Compared with conventional methods, the Transformer achieves superior performance in feature extraction and classification. Variational attention, on the other hand, places a distribution over the attention weights and enhances model interpretability: it can identify pertinent features and offer insights into the root causes of faults. The proposed model, TransVAT, is therefore based on the relation network of few-shot learning and replaces the dot-product attention in the Transformer encoder with variational attention for feature extraction. Experimental results demonstrate that the model performs well with limited data, especially on the one-shot task.
AB - Fault diagnosis plays a critical role in ensuring safety and minimizing downtime across various industries. However, because fault signals are difficult to acquire in practical engineering systems, labeled samples are often scarce. To address this issue, few-shot learning has emerged in recent years as a promising approach for bearing fault diagnosis. Recent studies have demonstrated the effectiveness of the Transformer and variational attention in this field. Compared with conventional methods, the Transformer achieves superior performance in feature extraction and classification. Variational attention, on the other hand, places a distribution over the attention weights and enhances model interpretability: it can identify pertinent features and offer insights into the root causes of faults. The proposed model, TransVAT, is therefore based on the relation network of few-shot learning and replaces the dot-product attention in the Transformer encoder with variational attention for feature extraction. Experimental results demonstrate that the model performs well with limited data, especially on the one-shot task.
KW - Deep Learning
KW - Fault Diagnosis
KW - Few-Shot Learning
KW - Transformer
KW - Variational Attention
UR - http://www.scopus.com/inward/record.url?scp=85178029057&partnerID=8YFLogxK
U2 - 10.1109/SAFEPROCESS58597.2023.10295900
DO - 10.1109/SAFEPROCESS58597.2023.10295900
M3 - Conference Proceeding
AN - SCOPUS:85178029057
T3 - Proceedings of 2023 CAA Symposium on Fault Detection, Supervision and Safety for Technical Processes, SAFEPROCESS 2023
BT - Proceedings of 2023 CAA Symposium on Fault Detection, Supervision and Safety for Technical Processes, SAFEPROCESS 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 22 September 2023 through 24 September 2023
ER -