A Dataset and Benchmark towards Multi-Modal Face Anti-Spoofing under Surveillance Scenarios

Xudong Chen, Shugong Xu*, Qiaobin Ji, Shan Cao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)

Abstract

Face Anti-spoofing (FAS) is a challenging problem due to complex deployment scenarios and diverse face presentation attack patterns. In particular, when captured images are low-resolution, blurry, and come from different domains, FAS performance degrades significantly. Existing multi-modal FAS datasets rarely address cross-domain problems under deployment scenarios, which hinders the study of model performance. To address these problems, we explore the fine-grained differences between multi-modal cameras and construct a cross-domain multi-modal FAS dataset under surveillance scenarios called GREAT-FASD-S. In addition, we propose an Attention-based Face Anti-spoofing network with Feature Augment (AFA) to handle FAS on low-quality face images. It consists of a depthwise separable attention module (DAM) and a multi-modal feature augment module (MFAM). Our model achieves state-of-the-art performance on the CASIA-SURF dataset and our proposed GREAT-FASD-S dataset.
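The abstract names a depthwise separable attention module (DAM) but does not spell out its design. As a rough conceptual sketch only, the following NumPy code illustrates the general idea such a module combines: a depthwise convolution (one kernel per channel) followed by a pointwise 1x1 convolution, whose output is passed through a sigmoid to gate the input features. The function name, shapes, and gating choice are all assumptions for illustration, not the paper's actual DAM.

```python
import numpy as np

def depthwise_separable_attention(x, dw_kernels, pw_weights):
    """Hypothetical sketch of depthwise separable attention (not the paper's DAM).

    x:          feature map, shape (C, H, W)
    dw_kernels: per-channel 3x3 kernels, shape (C, 3, 3)  -- depthwise step
    pw_weights: 1x1 channel-mixing weights, shape (C, C)  -- pointwise step
    Returns attention-gated features with the same shape as x.
    """
    C, H, W = x.shape
    # Depthwise convolution: each channel is convolved with its own kernel
    padded = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    dw = np.zeros_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                dw[c, i, j] = np.sum(padded[c, i:i + 3, j:j + 3] * dw_kernels[c])
    # Pointwise (1x1) convolution mixes information across channels
    pw = np.einsum('oc,chw->ohw', pw_weights, dw)
    # Sigmoid turns the result into a gate in (0, 1); gate the input features
    attn = 1.0 / (1.0 + np.exp(-pw))
    return x * attn
```

Because the sigmoid gate lies strictly in (0, 1), the output never exceeds the input in magnitude, which is one common way attention re-weights rather than amplifies features.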

Original language: English
Article number: 9328436
Pages (from-to): 28140-28155
Number of pages: 16
Journal: IEEE Access
Volume: 9
DOIs
Publication status: Published - 2021
Externally published: Yes

Keywords

  • cross-domain
  • face anti-spoofing
  • multi-modal
  • surveillance scenarios
