Music Genre Classification with LSTM based on Time and Frequency Domain Features

Yinhui Yi, Xiaohui Zhu, Yong Yue, Wei Wang

Research output: Chapter in Book or Report/Conference proceeding › Conference Proceeding › peer-review

8 Citations (Scopus)

Abstract

Deep features generated by deep learning models carry more information for music classification than short-term features. This paper uses a long short-term memory (LSTM) model to generate deep features for music genre classification. First, two short-term features, the zero-crossing rate (ZCR) and mel-frequency cepstral coefficients (MFCC), are extracted from the digital music signal; these are a time-domain and a frequency-domain feature, respectively. The two features are then fed to the LSTM to generate deep features. Finally, a support vector machine (SVM) and a k-nearest neighbors (KNN) classifier are each used to classify the music genre from these deep features. Experimental results show that using the LSTM significantly increases the accuracy of music genre classification.
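As a rough illustration of the time-domain feature the paper starts from, the following is a minimal NumPy sketch of per-frame ZCR extraction (frame length, hop size, and the test signals are illustrative choices, not values from the paper; the MFCC extraction, LSTM deep-feature stage, and SVM/KNN classifiers are omitted here):

```python
import numpy as np

def zero_crossing_rate(signal, frame_len=1024, hop=512):
    """Per-frame ZCR: fraction of adjacent sample pairs whose signs differ."""
    rates = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        signs = np.signbit(frame)
        rates.append(np.mean(signs[1:] != signs[:-1]))
    return np.array(rates)

# A pure tone crosses zero twice per period, so its ZCR is about 2*f/sr;
# sign-random noise flips sign between neighbors about half the time.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)  # 440 Hz sine -> ZCR near 2*440/8000 = 0.11
noise = np.sign(np.random.default_rng(0).standard_normal(sr))  # ZCR near 0.5
print(zero_crossing_rate(tone).mean())
print(zero_crossing_rate(noise).mean())
```

In the paper's pipeline, per-frame sequences like this (together with MFCC frames, e.g. from a library such as librosa) would form the input sequence to the LSTM, whose hidden representation serves as the deep feature for the SVM/KNN classifiers.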

Original language: English
Title of host publication: 2021 IEEE 6th International Conference on Computer and Communication Systems, ICCCS 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 678-682
Number of pages: 5
ISBN (Electronic): 9780738126043
DOIs
Publication status: Published - 23 Apr 2021
Event: 6th IEEE International Conference on Computer and Communication Systems, ICCCS 2021 - Chengdu, China
Duration: 23 Apr 2021 – 26 Apr 2021

Publication series

Name: 2021 IEEE 6th International Conference on Computer and Communication Systems, ICCCS 2021

Conference

Conference: 6th IEEE International Conference on Computer and Communication Systems, ICCCS 2021
Country/Territory: China
City: Chengdu
Period: 23/04/21 – 26/04/21

Keywords

  • Deep features
  • LSTM
  • MFCC
  • Music classification
  • ZCR
