
From Sharpness to Better Generalization for Speech Deepfake Detection

Wen Huang, Xuechen Liu, Xin Wang, Junichi Yamagishi*, Yanmin Qian*

*Corresponding author for this work

Shanghai Jiao Tong University
Research Organization of Information and Systems, National Institute of Informatics

Research output: Contribution to journal › Conference article › peer-review

Abstract

Generalization remains a critical challenge in speech deepfake detection (SDD). While various approaches aim to improve robustness, generalization is typically assessed through performance metrics like equal error rate without a theoretical framework to explain model performance. This work investigates sharpness as a theoretical proxy for generalization in SDD. We analyze how sharpness responds to domain shifts and find it increases in unseen conditions, indicating higher model sensitivity. Based on this, we apply Sharpness-Aware Minimization (SAM) to reduce sharpness explicitly, leading to better and more stable performance across diverse unseen test sets. Furthermore, correlation analysis confirms a statistically significant relationship between sharpness and generalization in most test settings. These findings suggest that sharpness can serve as a theoretical indicator for generalization in SDD and that sharpness-aware training offers a promising strategy for improving robustness.
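The abstract's core technique, Sharpness-Aware Minimization (SAM), performs a two-step update: first ascend to a worst-case weight perturbation within a small L2 ball of radius rho, then descend using the gradient evaluated at that perturbed point. A minimal sketch of this update rule, illustrated on a toy quadratic loss (the function, learning rate, and rho values here are illustrative assumptions, not the paper's actual training setup):

```python
import numpy as np

def sam_step(w, loss_grad, lr=0.1, rho=0.05):
    """One SAM update:
    1) move to the approximate worst-case point within an L2 ball of radius rho,
    2) take a gradient-descent step using the gradient at that perturbed point.
    """
    g = loss_grad(w)
    # First-order approximation of the worst-case perturbation direction.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Gradient at the perturbed weights drives the actual update.
    g_adv = loss_grad(w + eps)
    return w - lr * g_adv

# Toy loss L(w) = 0.5 * ||w||^2, whose gradient is simply w.
grad = lambda w: w
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, grad)
print(np.linalg.norm(w))  # norm shrinks toward the minimizer at 0
```

Because the descent direction is taken at the perturbed point rather than the current weights, SAM penalizes solutions whose loss rises sharply under small weight perturbations, which is the sharpness notion the paper correlates with generalization.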

Original language: English
Pages (from-to): 5338-5342
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
DOIs
Publication status: Published - 2025
Externally published: Yes
Event: 26th Interspeech Conference 2025, Rotterdam, Netherlands
Duration: 17 Aug 2025 – 21 Aug 2025

Keywords

  • generalization
  • sharpness-aware minimization
  • speech deepfake detection
