Polyphonic Sound Event Detection Using Transposed Convolutional Recurrent Neural Network
Chandra Churh Chatterjee, Manjunath Mulimani, Shashidhar G Koolagudi
SPS
In this paper, we propose a Transposed Convolutional Recurrent Neural Network (TCRNN) architecture for polyphonic sound event recognition. A transposed convolution layer, which carries out a regular convolution operation but reverses its spatial transformation, is combined with a bidirectional Recurrent Neural Network (RNN) to form the TCRNN. Instead of traditional mel spectrogram features, the proposed methodology incorporates mel-IFgram (Instantaneous Frequency spectrogram) features. The performance of the proposed approach is evaluated on sound events from the publicly available TUT-SED 2016 dataset and the Joint sound scene and polyphonic sound event recognition dataset. Results show that the proposed approach outperforms state-of-the-art methods.
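The architecture described above, a transposed convolution front end feeding a bidirectional RNN with frame-wise multi-label outputs, can be sketched as follows. This is a minimal illustrative PyTorch sketch, not the authors' implementation: the layer counts, kernel sizes, hidden sizes, and class count are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn


class TCRNN(nn.Module):
    """Illustrative transposed-convolutional recurrent network for
    polyphonic sound event detection (hyperparameters are assumed)."""

    def __init__(self, n_mels=40, n_classes=6, rnn_hidden=32):
        super().__init__()
        # Transposed convolution along the frequency axis: a regular
        # convolution whose spatial mapping is reversed (it upsamples
        # frequency from n_mels to (n_mels - 1) * 2 + 3 bins).
        self.tconv = nn.Sequential(
            nn.ConvTranspose2d(1, 16, kernel_size=(1, 3), stride=(1, 2)),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 4)),  # pool frequency only; keep time
        )
        freq_out = ((n_mels - 1) * 2 + 3) // 4  # frequency bins after pooling
        # Bidirectional RNN models temporal context in both directions.
        self.rnn = nn.GRU(16 * freq_out, rnn_hidden,
                          batch_first=True, bidirectional=True)
        # Sigmoid outputs allow overlapping (polyphonic) events per frame.
        self.fc = nn.Linear(2 * rnn_hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, n_mels) -- e.g. mel-IFgram features
        x = x.unsqueeze(1)                      # (B, 1, T, F)
        x = self.tconv(x)                       # (B, 16, T, F')
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)  # (B, T, 16 * F')
        x, _ = self.rnn(x)                      # (B, T, 2 * rnn_hidden)
        return torch.sigmoid(self.fc(x))        # frame-wise event activities


if __name__ == "__main__":
    model = TCRNN()
    feats = torch.randn(2, 100, 40)  # 2 clips, 100 frames, 40 mel bins
    probs = model(feats)
    print(probs.shape)  # (2, 100, 6): per-frame probability per event class
```

At inference, the per-frame probabilities would typically be thresholded (e.g. at 0.5) to obtain binary event-activity decisions for each class.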