SFT: Few-Shot Learning via Self-Supervised Feature Fusion With Transformer

Jit Yan Lim, Kian Ming Lim*, Chin Poo Lee, Yong Xuan Tan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

The few-shot learning paradigm aims to generalize to unseen tasks with limited samples. However, a focus solely on class-level discrimination may fall short of achieving robust generalization, especially when neglecting instance diversity and discriminability. This study introduces a metric-based few-shot approach, named Self-supervised Feature Fusion with Transformer (SFT), which integrates self-supervised learning with a transformer. SFT addresses the limitations of previous approaches by employing two distinct self-supervised tasks in separate models during pre-training, thus enhancing both instance diversity and discriminability in the feature space. The training process unfolds in two stages: pre-training and transfer learning. In pre-training, each model undergoes training with specific self-supervised tasks to harness the benefits of enhanced feature space. In the subsequent transfer learning stage, model weights are frozen, acting as feature extractors. The features from both models are amalgamated using a feature fusion technique and are transformed into task-specific features by a transformer, boosting discrimination on unseen tasks. The combined features enable the model to learn a well-generalized representation, effectively tackling the challenges posed by few-shot tasks. The proposed SFT method achieves state-of-the-art results on three benchmark datasets in few-shot image classification.
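The abstract outlines a two-stage pipeline: frozen backbones pre-trained with distinct self-supervised tasks, feature fusion of their outputs, and a transformer that adapts the fused features to each few-shot task before metric-based classification. Below is a minimal PyTorch-style sketch of the transfer-learning stage only, under assumed design choices (concatenation as the fusion step, a single transformer encoder layer, prototype-based classification); all module names and dimensions are hypothetical and do not come from the paper itself.

```python
import torch
import torch.nn as nn


class SFTHead(nn.Module):
    """Hypothetical sketch of the transfer-learning stage described in the
    abstract: two frozen self-supervised backbones, feature fusion by
    concatenation, and a transformer producing task-specific features
    for metric-based (prototype) few-shot classification."""

    def __init__(self, backbone_a, backbone_b, feat_dim=640, n_heads=8):
        super().__init__()
        self.backbone_a = backbone_a  # pre-trained with self-supervised task A (frozen)
        self.backbone_b = backbone_b  # pre-trained with self-supervised task B (frozen)
        for p in list(backbone_a.parameters()) + list(backbone_b.parameters()):
            p.requires_grad = False
        # A transformer encoder adapts the fused features to the current task.
        layer = nn.TransformerEncoderLayer(
            d_model=2 * feat_dim, nhead=n_heads, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(layer, num_layers=1)

    def embed(self, images):
        # Fuse the two feature sets by concatenation (one possible fusion choice).
        fused = torch.cat([self.backbone_a(images), self.backbone_b(images)], dim=-1)
        # Treat the episode's samples as one sequence so the transformer can
        # condition each embedding on the whole task.
        return self.transformer(fused.unsqueeze(0)).squeeze(0)

    def forward(self, support, support_labels, query, n_way):
        z_s, z_q = self.embed(support), self.embed(query)
        # Class prototypes: mean support embedding per class (metric-based step).
        protos = torch.stack([z_s[support_labels == c].mean(0) for c in range(n_way)])
        # Negative Euclidean distance to each prototype serves as the logit.
        return -torch.cdist(z_q, protos)
```

In this sketch only the transformer is trainable during transfer learning, mirroring the abstract's description of the pre-trained models acting purely as feature extractors; the actual fusion operator and classifier used by SFT may differ.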

Original language: English
Pages (from-to): 86690-86703
Number of pages: 14
Journal: IEEE Access
Volume: 12
DOIs
Publication status: Published - 2024
Externally published: Yes

Keywords

  • contrastive learning
  • feature fusion
  • few-shot learning
  • self-supervised learning
  • transformer

