  • SPS Members: Free
  • IEEE Members: $11.00
  • Non-members: $15.00
  • Length: 00:13:48
08 May 2022

The task of manipulating the levels and/or effects of individual instruments to recompose a mixture of recordings, or remixing, is common across a variety of applications such as music production, audio-visual post-production, and podcasting. Remixing, however, traditionally requires access to the individual source recordings, which restricts the creative process. To work around this, source separation algorithms can decompose a mixture into its constituent sources, whose levels a user can then adjust before mixing them back together. This two-step approach, however, still suffers from audible artifacts and motivates further work. In this work, we repurpose Conv-TasNet, a well-known source separation model, into two neural remixing architectures that learn to remix directly rather than merely to separate sources. We use an explicit loss term that directly measures remix quality and jointly optimize it with a separation loss. We evaluate our methods on the Slakh and MUSDB18 datasets and report remixing performance as well as the impact on source separation as a byproduct. Our results suggest that learning to remix significantly outperforms a strong separation baseline and is particularly useful for small volume changes.
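The abstract does not spell out the joint objective, but the idea can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes a fixed source ordering (no permutation-invariant training, plausible for Slakh-style stems), user-supplied per-source gains, negative SI-SDR for both loss terms, and a hypothetical weighting `alpha` between the separation and remix terms.

```python
import torch

def si_sdr(est, ref, eps=1e-8):
    """Scale-invariant SDR in dB, computed per example over the last (time) axis."""
    proj = ((est * ref).sum(-1, keepdim=True) /
            ((ref ** 2).sum(-1, keepdim=True) + eps)) * ref  # projection of est onto ref
    noise = est - proj
    return 10 * torch.log10((proj ** 2).sum(-1) / ((noise ** 2).sum(-1) + eps) + eps)

def joint_remix_loss(est_sources, ref_sources, gains, alpha=0.5):
    """Jointly penalize separation error and remix error.

    est_sources, ref_sources: (batch, n_src, time) separated and reference stems.
    gains: (batch, n_src) user-chosen remix gains (assumed input format).
    alpha: hypothetical trade-off weight; the paper's exact weighting is not given here.
    """
    # Separation term: each estimated source should match its reference stem.
    sep_loss = -si_sdr(est_sources, ref_sources).mean()

    # Remix term: the gain-weighted sum of estimates should match the
    # gain-weighted sum of references, i.e. the target remix.
    est_remix = (gains.unsqueeze(-1) * est_sources).sum(1)
    ref_remix = (gains.unsqueeze(-1) * ref_sources).sum(1)
    remix_loss = -si_sdr(est_remix, ref_remix).mean()

    return alpha * sep_loss + (1 - alpha) * remix_loss

# Usage with random tensors: halve source 1, double source 3 in the remix.
est = torch.randn(2, 4, 16000)
ref = torch.randn(2, 4, 16000)
gains = torch.tensor([[1.0, 0.5, 1.0, 2.0],
                      [1.0, 1.0, 0.0, 1.0]])
loss = joint_remix_loss(est, ref, gains)
```

Note how the remix term measures error only after the gain-weighted sum, so per-source errors that cancel in the mixture go unpenalized there; this is one way an explicit remix loss can trade separation quality for remix quality.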
