ComSense-CNN: acoustic event classification via 1D convolutional neural network with compressed sensing

Pooi Shiang Tan, Kian Ming Lim*, Cheah Heng Tan, Chin Poo Lee, Lee Chung Kwek

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

Sound plays an important role in daily life: humans use it to communicate with each other and to understand the events occurring around them. This has prompted researchers to study how to automatically identify the event taking place by analyzing the acoustic signal. This paper presents a deep learning model enhanced by compressed sensing for acoustic event classification. Compressed sensing first transforms the input acoustic signal into a reconstructed signal, reducing the noise present in the input. The reconstructed signals are then fed into a 1-dimensional convolutional neural network (1D-CNN) to train a deep learning model for acoustic event classification. In addition, dropout regularization is applied in the 1D-CNN to mitigate overfitting. The proposed compressed sensing with 1D-CNN was evaluated on three benchmark datasets, namely Soundscapes1, Soundscapes2, and UrbanSound8K, achieving F1-scores of 80.5%, 81.1%, and 69.2%, respectively.
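The sketch below illustrates the kind of pipeline the abstract describes, not the authors' implementation: a compressed-sensing reconstruction stage followed by a 1D-CNN with dropout. The reconstruction here uses ISTA with a DCT dictionary and a random Gaussian measurement matrix as a stand-in for the paper's method, and the measurement ratio, sparsity weight, and network layer sizes are assumptions for illustration only.

# Hypothetical sketch of a compressed-sensing + 1D-CNN pipeline (PyTorch/NumPy).
# The CS reconstruction (ISTA, DCT dictionary) and all layer sizes are assumptions.
import numpy as np
import torch
import torch.nn as nn
from scipy.fft import idct

def cs_reconstruct(x, m_ratio=0.5, lam=0.01, n_iter=100):
    """Compress a 1D frame with a random Gaussian measurement matrix, then
    reconstruct it with ISTA assuming sparsity in the DCT domain."""
    n = x.shape[0]
    m = int(m_ratio * n)
    rng = np.random.default_rng(0)
    phi = rng.standard_normal((m, n)) / np.sqrt(m)        # measurement matrix
    y = phi @ x                                           # compressed measurements
    psi = idct(np.eye(n), norm="ortho", axis=0)           # DCT synthesis dictionary
    A = phi @ psi                                         # sensing matrix on DCT coefficients
    alpha = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2                         # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ alpha - y)
        alpha = alpha - grad / L
        alpha = np.sign(alpha) * np.maximum(np.abs(alpha) - lam / L, 0.0)  # soft threshold
    return psi @ alpha                                    # reconstructed (denoised) signal

class OneDCNN(nn.Module):
    """Small 1D-CNN classifier with dropout regularization (layer sizes are illustrative)."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Dropout(0.5),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )
    def forward(self, x):
        return self.classifier(self.features(x))

# Usage on a dummy frame: reconstruct the signal, then classify it.
frame = np.random.randn(1024)
recon = cs_reconstruct(frame)
model = OneDCNN(n_classes=10)
logits = model(torch.tensor(recon, dtype=torch.float32).view(1, 1, -1))
print(logits.shape)  # torch.Size([1, 10])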

Original language: English
Pages (from-to): 735-741
Number of pages: 7
Journal: Signal, Image and Video Processing
Volume: 17
Issue number: 3
DOIs
Publication status: Published - Apr 2023
Externally published: Yes

Keywords

  • 1D convolutional neural network
  • 1D-CNN
  • Acoustic event classification
  • Compressed sensing

