Open Access

EPJ Web Conf., Volume 328, 2025
First International Conference on Engineering and Technology for a Sustainable Future (ICETSF-2025)
Article Number: 01031
Number of pages: 9
DOI: https://doi.org/10.1051/epjconf/202532801031
Published online: 18 June 2025

References
  1. R. Hennequin, A. Khlif, F. Voituret, and M. Moussallam. "Spleeter: a fast and efficient music source separation tool with pre-trained models." Journal of Open Source Software 5, no. 50 (2020): 2154.
  2. Q. Kong, Y. Cao, H. Liu, K. Choi, and Y. Wang. "Decoupling magnitude and phase estimation with deep ResUNet for music source separation." arXiv preprint arXiv:2109.05418 (2021).
  3. S. Rouard, F. Massa, and A. Défossez. "Hybrid transformers for music source separation." In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1–5. IEEE, 2023.
  4. W. Choi, M. Kim, J. Chung, and S. Jung. "LaSAFT: Latent source attentive frequency transformation for conditioned source separation." In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 171–175. IEEE, 2021.
  5. Y. Luo and N. Mesgarani. "Conv-TasNet: Surpassing ideal time-frequency magnitude masking for speech separation." IEEE/ACM Transactions on Audio, Speech, and Language Processing 27, no. 8 (2019): 1256–1266.
  6. D. Stoller, S. Ewert, and S. Dixon. "Wave-U-Net: A multi-scale neural network for end-to-end audio source separation." arXiv preprint arXiv:1806.03185 (2018).
  7. A. Jansson, E. Humphrey, N. Montecchio, R. Bittner, A. Kumar, and T. Weyde. "Singing voice separation with deep U-Net convolutional networks." In Proceedings of the 18th International Society for Music Information Retrieval Conference (ISMIR), 2017.
  8. F.-R. Stöter, S. Uhlich, A. Liutkus, and Y. Mitsufuji. "Open-Unmix - a reference implementation for music source separation." Journal of Open Source Software 4, no. 41 (2019): 1667.
  9. A. Défossez, N. Usunier, L. Bottou, and F. Bach. "Music source separation in the waveform domain." arXiv preprint arXiv:1911.13254 (2019).
  10. X. Jaureguiberry, E. Vincent, and G. Richard. "Fusion methods for speech enhancement and audio source separation." IEEE/ACM Transactions on Audio, Speech, and Language Processing 24, no. 7 (2016): 1266–1279.
  11. G. Zhu, J. Darefsky, F. Jiang, A. Selitskiy, and Z. Duan. "Music source separation with generative flow." IEEE Signal Processing Letters 29 (2022): 2288–2292.
  12. M.S. Pedersen, D. Wang, J. Larsen, and U. Kjems. "Two-microphone separation of speech mixtures." IEEE Transactions on Neural Networks 19, no. 3 (2008): 475–492.
  13. T. Adalı, Y. Levin-Schwartz, and V.D. Calhoun. "Multimodal data fusion using source separation: Two effective models based on ICA and IVA and their properties." Proceedings of the IEEE 103, no. 9 (2015): 1478–1493.
  14. Z. Rafii, A. Liutkus, F.-R. Stöter, S.I. Mimilakis, D. FitzGerald, and B. Pardo. "An overview of lead and accompaniment separation in music." IEEE/ACM Transactions on Audio, Speech, and Language Processing 26, no. 8 (2018): 1307–1335.
  15. H. Saruwatari, T. Kawamura, T. Nishikawa, A. Lee, and K. Shikano. "Blind source separation based on a fast-convergence algorithm combining ICA and beamforming." IEEE Transactions on Audio, Speech, and Language Processing 14, no. 2 (2006): 666–678.
  16. A. Muñoz-Montoro, J.J. Carabias-Orti, P. Cabañas-Molero, F.J. Cañadas-Quesada, and N. Ruiz-Reyes. "Multichannel blind music source separation using directivity-aware MNMF with harmonicity constraints." IEEE Access 10 (2022): 17781–17795.
  17. F.S. Muttaqin and S. Suyanto. "Music source separation using generative adversarial network and U-Net." In 2020 8th International Conference on Information and Communication Technology (ICoICT), pp. 1–6. IEEE, 2020.
  18. E. Gusó, J. Pons, S. Pascual, and J. Serrà. "On loss functions and evaluation metrics for music source separation." In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 306–310. IEEE, 2022.
  19. T. Shun and S. Arai. "Music source separation using Deform-Conv Dense U-Net." In 2021 3rd International Conference on Cybernetics and Intelligent System (ICORIS), pp. 1–5. IEEE, 2021.
  20. F. Yijun, L. Hong, W. Zhu, and H. Ye. "A highly scalable music source separation method based on CGAN." In 2022 IEEE 6th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), pp. 508–512. IEEE, 2022.
  21. E. Vincent. "Musical source separation using time-frequency source priors." IEEE Transactions on Audio, Speech, and Language Processing 14, no. 1 (2006): 91–98.
  22. A. Rashid and O. Hasan. "Formal analysis of continuous-time systems using Fourier transform." Journal of Symbolic Computation 90 (2019): 65–88.
  23. Z. Rafii, A. Liutkus, F.-R. Stöter, S.I. Mimilakis, and R. Bittner. "The MUSDB18 corpus for music separation." (2017).
  24. M.A. Vasilescu and D. Terzopoulos. "Multilinear (tensor) ICA and dimensionality reduction." In International Conference on Independent Component Analysis and Signal Separation, pp. 818–826. Springer, Berlin, Heidelberg, 2007.
