TY - GEN
T1 - Decoupling magnitude and phase estimation with deep ResUNet for music source separation
AU - Kong, Qiuqiang
AU - Cao, Yin
AU - Liu, Haohe
AU - Choi, Keunwoo
AU - Wang, Yuxuan
N1 - Publisher Copyright:
© 2021 Proceedings of the 22nd International Conference on Music Information Retrieval, ISMIR 2021. All Rights Reserved.
PY - 2021
Y1 - 2021
AB - Deep neural network based methods have been successfully applied to music source separation. They typically learn a mapping from a mixture spectrogram to a set of source spectrograms, all with magnitudes only. This approach has several limitations: 1) its incorrect phase reconstruction degrades the performance, 2) it limits the magnitude of masks to between 0 and 1, while we observe that 22% of time-frequency bins have ideal ratio mask values of over 1 in a popular dataset, MUSDB18, 3) its potential on very deep architectures is under-explored. Our proposed system is designed to overcome these limitations. First, we propose to estimate phases by estimating complex ideal ratio masks (cIRMs), where we decouple the estimation of cIRMs into magnitude and phase estimations. Second, we extend the separation method to effectively allow the magnitude of the mask to be larger than 1. Finally, we propose a residual UNet architecture with up to 143 layers. Our proposed system achieves a state-of-the-art MSS result on the MUSDB18 dataset; in particular, an SDR of 8.98 dB on vocals, outperforming the previous best of 7.24 dB. The source code is available at: https://github.com/bytedance/music_source_separation
UR - http://www.scopus.com/inward/record.url?scp=85184123939&partnerID=8YFLogxK
M3 - Conference Proceeding
AN - SCOPUS:85184123939
T3 - Proceedings of the 22nd International Conference on Music Information Retrieval, ISMIR 2021
SP - 342
EP - 349
BT - Proceedings of the 22nd International Conference on Music Information Retrieval, ISMIR 2021
PB - International Society for Music Information Retrieval
T2 - 22nd International Conference on Music Information Retrieval, ISMIR 2021
Y2 - 7 November 2021 through 12 November 2021
ER -