LAVRF: Sign language recognition via Lightweight Attentive VGG16 with Random Forest

Edmond Li Ren Ewe, Chin Poo Lee*, Kian Ming Lim, Lee Chung Kwek, Ali Alqahtani

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Sign language recognition presents significant challenges due to the intricate nature of hand gestures and the necessity to capture fine-grained details. In response to these challenges, a novel approach is proposed: the Lightweight Attentive VGG16 with Random Forest (LAVRF) model. LAVRF introduces a refined adaptation of the VGG16 model integrated with attention modules, complemented by a Random Forest classifier. By streamlining the VGG16 architecture, the Lightweight Attentive VGG16 effectively manages complexity while incorporating attention mechanisms that dynamically concentrate on pertinent regions within input images, resulting in enhanced representation learning. Leveraging the Random Forest classifier provides notable benefits, including proficient handling of high-dimensional feature representations, reduction of variance and overfitting concerns, and resilience against noisy and incomplete data. Additionally, model performance is further optimized through hyperparameter optimization, utilizing Optuna in conjunction with hill climbing, which efficiently explores the hyperparameter space to discover optimal configurations. The proposed LAVRF model demonstrates outstanding accuracy on three datasets, achieving remarkable results of 99.98%, 99.90%, and 100% on the American Sign Language, American Sign Language with Digits, and NUS Hand Posture datasets, respectively.
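As a rough illustration of the pipeline's final stage described in the abstract — high-dimensional deep features passed to a Random Forest classifier — the sketch below uses scikit-learn. It is a minimal sketch under stated assumptions, not the paper's implementation: the Lightweight Attentive VGG16 feature extractor is replaced by synthetic class-clustered features, and the class count, feature dimension, and Random Forest hyperparameters are placeholders (the paper tunes hyperparameters with Optuna plus hill climbing).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for features produced by a CNN backbone such as the
# Lightweight Attentive VGG16: 500 samples of 512-dimensional
# "deep features" spread over 24 sign classes (all values illustrative).
rng = np.random.default_rng(0)
n_samples, n_classes, n_features = 500, 24, 512
centers = rng.normal(size=(n_classes, n_features))
labels = rng.integers(0, n_classes, size=n_samples)
features = centers[labels] + 0.3 * rng.normal(size=(n_samples, n_features))

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0, stratify=labels)

# Random Forest classifier stage; n_estimators is a placeholder,
# not a tuned value from the paper.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The design point this mirrors is that the forest operates on a fixed feature representation: swapping the synthetic features for real CNN embeddings changes only the `features`/`labels` arrays, not the classifier stage.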

Original language: English
Article number: e0298699
Journal: PLoS ONE
Volume: 19
Issue number: 4 April
DOIs
Publication status: Published - Apr 2024
Externally published: Yes
