SPS
IEEE Members: $11.00
Non-members: $15.00
Length: 01:11:45
Nonnegative Matrix Factorization (NMF) is a powerful technique for factorizing, decomposing, and explaining data. In the field of music information retrieval, for example, NMF has been used to decompose a music recording's magnitude spectrogram into musically meaningful spectral and activation patterns. Thanks to NMF's nonnegativity constraints and the multiplicative update rules that preserve them during training, it is easy to incorporate additional domain knowledge that guides the factorization toward interpretable results. On the other hand, deep neural networks (DNNs), which can learn complex non-linear patterns in a hierarchical manner, have become omnipresent thanks to the availability of suitable hardware and software tools. However, deep learning (DL) models are often hard to interpret and control due to their massive number of trainable parameters. In this presentation, we review and discuss current research directions that combine and transfer ideas between NMF-based and DL-based learning approaches. In particular, we show how NMF and its score-informed variants can be simulated by autoencoder-like neural network architectures in combination with projected gradient descent methods. In doing so, we aim to better understand the interaction between various regularization techniques and DL-based learning procedures, while improving the interpretability of DL-based decomposition results.
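To make the zero-preservation property concrete, here is a minimal NumPy sketch (not the presenters' code) of the classic Lee-Seung multiplicative updates for the Frobenius-norm NMF objective. The "score-informed" constraint below is a hypothetical example: because each update multiplies a factor by a nonnegative ratio, any entries initialized to zero stay zero, which is exactly how score information can steer the factorization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a magnitude spectrogram: K frequency bins x N time
# frames (synthetic data, purely for illustration).
K, N, R = 64, 100, 8
V = rng.random((K, N))
eps = 1e-9  # guard against division by zero

W = rng.random((K, R))  # spectral templates
H = rng.random((R, N))  # activation patterns

# Hypothetical score-informed constraint: suppose the score indicates
# that template 0 is inactive in the second half of the recording.
H[0, N // 2:] = 0.0

# Lee-Seung multiplicative updates for ||V - WH||_F^2. Every update
# multiplies the factor by a nonnegative ratio, so nonnegativity -- and
# any zeros planted as constraints -- are preserved throughout training.
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

print((H[0, N // 2:] == 0.0).all())   # the planted zeros survive
```

The same mechanism underlies many NMF variants: domain knowledge enters through the initialization, and the multiplicative form of the updates guarantees it is never overwritten.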
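The connection to DL-based training can likewise be sketched in a few lines, under the assumption of the same toy data as above: instead of multiplicative updates, take an ordinary gradient step on the reconstruction loss (as a neural-network optimizer would) and then project back onto the nonnegative orthant with `np.maximum(..., 0)`. This ReLU-like projection is what lets an autoencoder-style training loop mimic NMF's constraints; the step size here is a hand-picked value for this toy problem, not a general recipe.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same kind of toy "spectrogram" as before: K bins x N frames.
K, N, R = 64, 100, 8
V = rng.random((K, N))

W = rng.random((K, R))
H = rng.random((R, N))

lr = 1e-3  # small enough to be stable on this toy problem (assumption)

# Projected gradient descent on ||V - WH||_F^2: gradient step, then
# projection onto the nonnegative orthant.
for _ in range(1000):
    E = W @ H - V                          # residual
    W = np.maximum(W - lr * (E @ H.T), 0.0)  # grad wrt W is E @ H.T
    H = np.maximum(H - lr * (W.T @ E), 0.0)  # grad wrt H is W.T @ E

print(np.linalg.norm(V - W @ H) < np.linalg.norm(V))
```

Seen this way, W and H play the role of network weights and latent activations, and the projection is the point where NMF-style regularization enters an otherwise standard gradient-based training loop.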