
Continuous Speech Separation: Dataset And Analysis

Zhuo Chen, Takuya Yoshioka, Liang Lu, Tianyan Zhou, Zhong Meng, Yi Luo, Xiong Xiao, Jinyu Li, Jian Wu

04 May 2020

This paper describes a dataset and protocols for evaluating continuous speech separation (CSS) algorithms. Most prior studies on speech separation use pre-segmented signals of artificially mixed, highly overlapped utterances, and the algorithms are evaluated with signal-based metrics such as signal-to-distortion ratio (SDR). However, in natural conversations, speech audio is continuous, containing both overlapped and overlap-free components. In addition, the signal-based metrics correlate only weakly with automatic speech recognition (ASR) accuracy. We argue that not only does this make it hard to assess the practical relevance of the tested algorithms, but it also hinders researchers from developing systems that can be readily applied to real scenarios. In this paper, we define CSS as the task of generating a set of non-overlapped speech signals from a continuous audio stream containing multiple utterances that partially overlap to varying degrees. A new real recorded dataset, called LibriCSS, is derived from LibriSpeech by concatenating the corpus utterances to simulate conversations and capturing the replayed audio with far-field microphones. A Kaldi-based ASR evaluation protocol is established. Several aspects of a recently proposed speaker-independent CSS algorithm are investigated on this dataset. The data and evaluation scripts will be made available to facilitate research in this direction.
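The simulation step described above amounts to laying utterances on a single timeline so that consecutive utterances partially overlap, yielding a stream with both overlapped and overlap-free regions. The sketch below illustrates that idea only; mix_session, its parameters, and the jittered-overlap heuristic are assumptions of this example rather than the published LibriCSS recipe, and the subsequent loudspeaker-replay and far-field-recording step is not modeled.

    import numpy as np

    def mix_session(utterances, overlap_ratio=0.2, rng=None):
        """Lay 1-D waveform arrays on one timeline, letting each utterance
        start before the previous one ends, so the resulting stream contains
        both overlapped and overlap-free regions (illustrative only)."""
        if rng is None:
            rng = np.random.default_rng(0)
        stream = np.zeros(sum(len(u) for u in utterances), dtype=np.float32)
        cursor = 0
        for u in utterances:
            # Jitter the overlap so its degree varies across the session.
            ov = int(len(u) * overlap_ratio * rng.uniform(0.5, 1.5))
            start = max(0, cursor - ov)
            stream[start:start + len(u)] += u
            cursor = start + len(u)
        return stream[:cursor]

A CSS system would then take such a continuous stream as input and produce a set of overlap-free signals, each of which can be fed directly to an ASR back-end for the Kaldi-based evaluation described in the abstract.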
