Improving deep neural network performance by integrating kernelized Min-Max objective

Qiu Feng Wang, Kai Yao, Rui Zhang, Amir Hussain, Kaizhu Huang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

Deep neural networks (DNN), such as convolutional neural networks (CNN), have been widely used for object recognition. However, they are usually unable to ensure the intra-class compactness and inter-class separability in the kernel space that are known to be important in pattern recognition for achieving both robustness and accuracy. In this paper, we propose to integrate a kernelized Min-Max objective into DNN training in order to explicitly enforce both kernelized within-class compactness and between-class margin. The kernel space is implicitly mapped from the feature space associated with a certain upper layer of the DNN by exploiting the kernel trick, while the Min-Max objective in this space is interpolated with the original DNN loss function and optimized in the training phase. With a very small additional computational cost, the proposed strategy can easily be integrated into different DNN models without changing any other part of the original model. The recognition accuracy of the proposed method is evaluated with multiple DNN models (including shallow CNN, deep CNN and deep residual network models) on two benchmark datasets, CIFAR-10 and CIFAR-100. Extensive experimental results demonstrate that integrating the kernelized Min-Max objective into the training of DNN models achieves better results than state-of-the-art models, without incurring additional model complexity.
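The abstract does not give the formulas, but the general construction it describes can be sketched. The following PyTorch snippet is a minimal illustration, not the paper's exact method: it assumes an RBF kernel, uses simple batch-mean within-class and between-class kernel distances for the Min-Max term, and interpolates it with cross-entropy via an assumed trade-off weight `lam`. The names `rbf_gram`, `min_max_penalty`, `total_loss`, and the parameters `gamma` and `lam` are all hypothetical.

```python
import torch
import torch.nn.functional as F

def rbf_gram(feats, gamma=1.0):
    # Kernel trick: the kernel space is never constructed explicitly;
    # only the pairwise kernel values K[i, j] = exp(-gamma * ||f_i - f_j||^2)
    # over the batch features are needed.
    sq_dists = torch.cdist(feats, feats).pow(2)
    return torch.exp(-gamma * sq_dists)

def min_max_penalty(feats, labels, gamma=1.0):
    # Squared distance in the RBF kernel space reduces to Gram entries:
    # ||phi(f_i) - phi(f_j)||^2 = K_ii + K_jj - 2*K_ij = 2 - 2*K_ij.
    # Batch-mean construction is an assumption, not the paper's exact graph.
    K = rbf_gram(feats, gamma)
    dists = 2.0 - 2.0 * K
    n = feats.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=feats.device)
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    diff = labels.unsqueeze(0) != labels.unsqueeze(1)
    intra = dists[same].mean()   # within-class compactness (to minimize)
    inter = dists[diff].mean()   # between-class margin (to maximize)
    return intra - inter

def total_loss(logits, feats, labels, lam=0.01, gamma=1.0):
    # Original DNN loss interpolated with the Min-Max term, as the
    # abstract describes; lam is an assumed weight, not a paper value.
    return F.cross_entropy(logits, labels) + lam * min_max_penalty(feats, labels, gamma)
```

In training, `feats` would be taken from whichever upper layer the objective is attached to (e.g., the penultimate layer), so the extra cost is one batch Gram matrix per step, and `lam` would be tuned on validation data.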

Original language: English
Pages (from-to): 82-90
Number of pages: 9
Journal: Neurocomputing
Volume: 408
DOIs
Publication status: Published - 30 Sept 2020

Keywords

  • Convolutional neural network
  • Deep neural network
  • Kernel space
  • Kernelized Min-Max objective
  • Min-Max strategy
  • Object recognition
