Deep Fuzzy Multi-Teacher Distillation Network for Medical Visual Question Answering

Yishu Liu, Bingzhi Chen, Shuihua Wang, Guangming Lu, Zheng Zhang

Research output: Contribution to journal › Article › peer-review


Medical visual question answering (Medical VQA) is a critical cross-modal interaction task that has garnered considerable attention in the medical domain. Several existing methods leverage vision-and-language pre-training paradigms to mitigate the limitation of small-scale data. Nevertheless, most of them still suffer from two challenges that remain open for further research: 1) Limited research focuses on distilling representations from a complete modality to guide the representation learning of masked data in other modalities. 2) Multi-modal fusion based on self-attention mechanisms cannot effectively handle the inherent uncertainty and vagueness of information interaction across modalities. To mitigate these issues, in this paper, we propose a novel Deep Fuzzy Multi-teacher Distillation (DFMD) Network for medical visual question answering, which takes advantage of fuzzy logic to model the uncertainties in vision-language representations across modalities within a multi-teacher framework. Specifically, a multi-teacher knowledge distillation (MKD) module is conceived to assist in reconstructing the missing semantics under the supervision signal generated by teachers from the other, complete modality, achieving more robust semantic interaction across modalities. Incorporating insights from fuzzy logic theory, we propose a noise-robust encoder called FuzBERT that enables our DFMD model to reduce the imprecision and ambiguity in feature representation during the multi-modal interaction process. To the best of our knowledge, our work is the first attempt to combine fuzzy logic theory with a transformer-based encoder to effectively learn multi-modal representations for medical visual question answering. Experimental results on the VQA-RAD and SLAKE datasets consistently demonstrate the superiority of our proposed DFMD method over state-of-the-art baselines.

Original language: English
Pages (from-to): 1-15
Number of pages: 15
Journal: IEEE Transactions on Fuzzy Systems
Publication status: Accepted/In press - 2024


Keywords:
  • Fuzzy deep learning
  • fuzzy logic
  • knowledge distillation
  • medical visual question answering
