Face Presentation Attack Detection via Ensemble Learning Algorithm

Kim Wang Lee, Jit Yan Lim, Kian Ming Lim, Chin Poo Lee

Research output: Chapter in Book or Report/Conference proceeding › Conference Proceeding › peer-review

Abstract

Face recognition systems are vulnerable to a variety of presentation attacks, including print, mask, and replay attacks. To address the challenges posed by these attacks, we propose a deep learning-based technique built on the VGG19, ResNet152, and DenseNet161 models. We also investigate the bagging ensemble learning strategy to further improve classification reliability. The experimental findings show that the proposed approach is effective at recognising and categorising presentation attacks. Compared with training each model independently, the ensemble learning approach significantly increases overall accuracy, producing strong results on the investigated datasets. Based on these results, the proposed bagging technique performed well on Replay-Attack and OULU-NPU, achieving 1.22% and 4.86%, respectively.
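The bagging strategy described in the abstract can be illustrated with a minimal sketch: each base model (here standing in for VGG19, ResNet152, and DenseNet161) is trained on a bootstrap resample of the data, and their per-sample predictions are fused by majority vote. The function names and the example predictions below are hypothetical illustrations, not the authors' implementation.

```python
import random
from collections import Counter

def bootstrap_sample(data, labels, rng):
    # Standard bagging resample: draw n examples with replacement,
    # so each base model sees a slightly different training set.
    n = len(data)
    idx = [rng.randrange(n) for _ in range(n)]
    return [data[i] for i in idx], [labels[i] for i in idx]

def majority_vote(predictions):
    # predictions: one label list per base model; fuse by voting
    # on each test sample across the models.
    fused = []
    for sample_preds in zip(*predictions):
        fused.append(Counter(sample_preds).most_common(1)[0][0])
    return fused

# Hypothetical per-model outputs for five test faces
# (0 = bona fide, 1 = attack); the three lists stand in for
# the VGG19, ResNet152, and DenseNet161 classifiers.
vgg_preds      = [0, 1, 1, 0, 1]
resnet_preds   = [0, 1, 0, 0, 1]
densenet_preds = [1, 1, 1, 0, 0]

print(majority_vote([vgg_preds, resnet_preds, densenet_preds]))
# → [0, 1, 1, 0, 1]
```

With three voters and binary labels a strict majority always exists, so the fusion is well defined; disagreements of a single model (samples 1, 3, and 5 above) are outvoted.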

Original language: English
Title of host publication: 2023 IEEE 11th Conference on Systems, Process and Control, ICSPC 2023 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 101-106
Number of pages: 6
ISBN (Electronic): 9798350340860
DOIs
Publication status: Published - 2023
Externally published: Yes
Event: 11th IEEE Conference on Systems, Process and Control, ICSPC 2023 - Malacca, Malaysia
Duration: 16 Dec 2023 → …

Publication series

Name: 2023 IEEE 11th Conference on Systems, Process and Control, ICSPC 2023 - Proceedings

Conference

Conference: 11th IEEE Conference on Systems, Process and Control, ICSPC 2023
Country/Territory: Malaysia
City: Malacca
Period: 16/12/23 → …

Keywords

  • Bagging approach
  • Deep Learning
  • Ensemble learning
  • Face anti-spoofing

