  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 13:46
04 May 2020

Deep-learning-based acoustic echo cancellation (AEC) methods have been shown to outperform classical techniques. The main drawback of learning-based AEC is its dependency on the training set, which limits its practical deployment in mobile devices and unconstrained environments. This paper proposes a context-aware deep AEC (CAD-AEC) with two main components. The first component borrows ideas from classical AEC and performs frequency-domain adaptive filtering of the microphone signal, providing the deep AEC network with features that depend less on the development context. The second component is a deep contextual attention module (CAM) inserted between the recurrent encoder and decoder architectures. During inference, the deep CAM adaptively scales the encoder output with attention weights computed from the context. Experiments in both matched and mismatched training and testing environments show that the proposed CAD-AEC robustly achieves better echo return loss enhancement (ERLE) and perceptual speech quality than previous classical and deep-learning techniques.
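The two components can be illustrated with a minimal sketch. The per-bin frequency-domain NLMS update and the particular attention parameterization below are assumptions for illustration only; the paper's exact adaptive-filter update and CAM architecture are not reproduced here.

```python
import numpy as np

def fd_nlms(far_stft, mic_stft, mu=0.5, eps=1e-6):
    """Component 1 sketch: frequency-domain adaptive filtering.
    Adapts one complex tap per frequency bin (simplified NLMS) and returns
    the error (echo-suppressed) STFT, usable as a context-robust feature."""
    n_frames, n_bins = far_stft.shape
    w = np.zeros(n_bins, dtype=complex)          # per-bin echo-path estimate
    err = np.empty_like(mic_stft)
    for t in range(n_frames):
        x = far_stft[t]
        e = mic_stft[t] - w * x                  # mic minus echo estimate
        w += mu * np.conj(x) * e / (np.abs(x) ** 2 + eps)  # NLMS update
        err[t] = e
    return err

def contextual_attention(H, Wq, Wk):
    """Component 2 sketch: a contextual attention module that scales each
    encoder frame by a weight derived from a context summary of the input.
    Wq, Wk are hypothetical learned projections (illustrative only)."""
    Q = H @ Wq                                   # queries, one per frame
    K = H @ Wk                                   # keys, one per frame
    c = K.mean(axis=0)                           # context summary vector
    scores = Q @ c / np.sqrt(c.shape[0])         # scaled similarity to context
    a = np.exp(scores - scores.max())
    a /= a.sum()                                 # softmax attention weights
    return H * a[:, None]                        # adaptively scaled encoder output
```

In this toy setup the adaptive filter converges when the microphone signal is a scaled copy of the far-end signal, and the attention module rescales encoder frames without changing their dimensionality.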
