Abstract
Recent few-shot learning research has focused on developing learning methods that can adapt quickly to unseen tasks with small amounts of data and low computational cost. To achieve high performance on few-shot learning tasks, a method must generalize well from seen tasks to unseen tasks with a limited number of samples. In this work, we investigate a new metric-based few-shot learning framework that transfers knowledge from another effective classification model to produce well-generalized embeddings and improve performance on unseen tasks. Our proposed Efficient-PrototypicalNet combines transfer learning, knowledge distillation, and few-shot learning. We employ a pre-trained model as a feature extractor to obtain useful features from tasks and reduce task complexity; these features ease training in few-shot learning and improve performance. In addition, we apply knowledge distillation to the framework for a further performance gain. The proposed Efficient-PrototypicalNet was evaluated on five benchmark datasets, i.e., Omniglot, miniImageNet, tieredImageNet, CIFAR-FS, and FC100, and achieved state-of-the-art performance on most of them in the 5-way K-shot image classification task, especially on miniImageNet.
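To make the metric-based setup concrete, the sketch below shows a single prototypical-network episode that uses a frozen pre-trained backbone as the feature extractor, in the spirit described in the abstract. It is a minimal illustration, not the authors' implementation: the choice of torchvision's EfficientNet-B0 weights, the episode sizes, and the helper names (`embed`, `prototypical_logits`) are assumptions for demonstration only, and the knowledge-distillation term used in the paper is omitted here.

```python
# Minimal sketch (assumption-based, not the paper's code) of one
# prototypical-network episode with a frozen pre-trained feature extractor.
import torch
import torch.nn.functional as F
from torchvision import models

# Pre-trained backbone used purely as a feature extractor.
# Assumption: EfficientNet-B0 with ImageNet weights; the paper's exact
# backbone variant may differ.
backbone = models.efficientnet_b0(weights="IMAGENET1K_V1")
backbone.classifier = torch.nn.Identity()  # keep the pooled 1280-d features
backbone.eval()                            # frozen during the episode

@torch.no_grad()
def embed(x):
    """Map a batch of images to embedding vectors."""
    return backbone(x)

def prototypical_logits(support, support_labels, query, n_way):
    """Score query embeddings by negative distance to each class prototype."""
    prototypes = torch.stack(
        [support[support_labels == c].mean(dim=0) for c in range(n_way)]
    )                                       # (n_way, dim)
    dists = torch.cdist(query, prototypes)  # (n_query, n_way)
    return -dists                           # higher score = closer prototype

# Toy 5-way 1-shot episode with 15 query images per class (random tensors).
n_way, k_shot, n_query = 5, 1, 15
support_x = torch.randn(n_way * k_shot, 3, 224, 224)
support_y = torch.arange(n_way).repeat_interleave(k_shot)
query_x = torch.randn(n_way * n_query, 3, 224, 224)
query_y = torch.arange(n_way).repeat_interleave(n_query)

logits = prototypical_logits(embed(support_x), support_y, embed(query_x), n_way)
loss = F.cross_entropy(logits, query_y)
```

In the full framework, a distillation loss against a teacher classifier's outputs would be added to this episodic loss; the sketch above only covers the metric-based classification step.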
Original language | English |
---|---|
Pages (from-to) | 327-337 |
Number of pages | 11 |
Journal | Neurocomputing |
Volume | 459 |
DOIs | |
Publication status | Published - 7 Oct 2021 |
Externally published | Yes |
Keywords
- EfficientNet
- Few-shot learning
- Knowledge distillation
- Meta learning
- Prototypical network
- Transfer learning