Two-stage model re-optimization and application in face recognition

Jianyu Qian, Shiyi Mu, Hengjie Lu, Shugong Xu*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Face recognition technology has reached mature performance in many fields and is widely deployed on edge devices with limited computational resources. In practical scenarios, however, the deployment platform is often fixed in advance, and the deployed model is the result of trade-offs against the platform's compute and storage budgets; once determined, such models are difficult to adjust. Improving the performance of an existing face recognition model without disturbing the deployment pipeline therefore remains a problem of practical significance and research value. Moreover, with the rapid development of deep learning, a vast array of high-performance open-source pre-trained models is available online, and how to exploit these models efficiently is equally worth investigating. To address these issues, this paper introduces a novel training framework called two-stage model re-optimization (TSMR). TSMR enhances face recognition performance by leveraging knowledge distillation, model re-parameterization, and adversarial learning. In particular, TSMR improves accuracy without increasing inference latency, modifying the network architecture, or introducing additional computational cost; it is a versatile framework applicable to any CNN-based recognition network. Experimental results demonstrate the effectiveness of TSMR in improving lightweight face recognition models: across multiple datasets, including CALFW, CPLFW, YTF, and AgeDB-30, TSMR achieves an average accuracy improvement of 1% to 2%. Notably, with FDFaceNet as the baseline, TSMR reaches 95.58% accuracy on the LFW dataset, surpassing the state of the art among lightweight face recognition networks.
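The knowledge-distillation component mentioned above can be illustrated with a minimal sketch. The abstract does not specify TSMR's actual loss formulation, so the code below assumes the standard temperature-scaled distillation objective (Hinton-style soft targets with a KL-divergence loss); the function names and the temperature value are illustrative, not taken from the paper.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: a higher T softens the distribution,
    exposing the teacher's 'dark knowledge' about non-target classes."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between teacher and student soft targets,
    scaled by T^2 so its gradient magnitude is comparable to the
    hard-label cross-entropy term it is usually combined with."""
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

# Identical logits give zero distillation loss; any mismatch gives a
# positive penalty that pulls the student toward the teacher.
print(round(kd_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]), 6))  # → 0.0
```

In a framework like TSMR, a loss of this shape would be added to the lightweight student's training objective while a high-capacity pre-trained teacher stays frozen, which is what allows the student's accuracy to improve without any change to its deployed architecture or inference cost.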

Original language: English
Article number: 130805
Journal: Neurocomputing
Volume: 651
Publication status: Published - 28 Oct 2025

Keywords

  • Face recognition
  • Knowledge distillation
  • Lightweight
  • Re-parameterization
