TY - CHAP
T1 - DECOUPLING MAGNITUDE AND PHASE ESTIMATION WITH DEEP ResUNet FOR MUSIC SOURCE SEPARATION
AU - Kong, Qiuqiang
AU - Cao, Yin
AU - Liu, Haohe
AU - Choi, Keunwoo
AU - Wang, Yuxuan
N1 - Publisher Copyright:
© Q. Kong, Y. Cao, H. Liu, K. Choi, and Y. Wang. Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).
PY - 2021
Y1 - 2021
N2 - Deep neural network based methods have been successfully applied to music source separation. They typically learn a mapping from a mixture spectrogram to a set of source spectrograms, all with magnitudes only. This approach has several limitations: 1) its incorrect phase reconstruction degrades the performance, 2) it limits the magnitude of masks between 0 and 1 while we observe that 22% of time-frequency bins have ideal ratio mask values of over 1 in a popular dataset, MUSDB18, 3) its potential on very deep architectures is under-explored. Our proposed system is designed to overcome these. First, we propose to estimate phases by estimating complex ideal ratio masks (cIRMs), where we decouple the estimation of cIRMs into magnitude and phase estimations. Second, we extend the separation method to effectively allow the magnitude of the mask to be larger than 1. Finally, we propose a residual UNet architecture with up to 143 layers. Our proposed system achieves a state-of-the-art music source separation (MSS) result on the MUSDB18 dataset, especially an SDR of 8.98 dB on vocals, outperforming the previous best performance of 7.24 dB. The source code is available at: https://github.com/bytedance/music_source_separation
AB - Deep neural network based methods have been successfully applied to music source separation. They typically learn a mapping from a mixture spectrogram to a set of source spectrograms, all with magnitudes only. This approach has several limitations: 1) its incorrect phase reconstruction degrades the performance, 2) it limits the magnitude of masks between 0 and 1 while we observe that 22% of time-frequency bins have ideal ratio mask values of over 1 in a popular dataset, MUSDB18, 3) its potential on very deep architectures is under-explored. Our proposed system is designed to overcome these. First, we propose to estimate phases by estimating complex ideal ratio masks (cIRMs), where we decouple the estimation of cIRMs into magnitude and phase estimations. Second, we extend the separation method to effectively allow the magnitude of the mask to be larger than 1. Finally, we propose a residual UNet architecture with up to 143 layers. Our proposed system achieves a state-of-the-art music source separation (MSS) result on the MUSDB18 dataset, especially an SDR of 8.98 dB on vocals, outperforming the previous best performance of 7.24 dB. The source code is available at: https://github.com/bytedance/music_source_separation
UR - http://www.scopus.com/inward/record.url?scp=85219541499&partnerID=8YFLogxK
M3 - Chapter
AN - SCOPUS:85219541499
T3 - Proceedings of the International Society for Music Information Retrieval Conference
SP - 342
EP - 349
BT - Proceedings of the International Society for Music Information Retrieval Conference
PB - International Society for Music Information Retrieval
ER -