Covariance Regularization for Probabilistic Linear Discriminant Analysis

Zhiyuan Peng (The Chinese University of Hong Kong); Mingjie Shao (The Chinese University of Hong Kong; Shandong University); Xuanji He (Meituan); Xu Li (ARC Lab, Tencent); Tan Lee (The Chinese University of Hong Kong); Ke Ding (Meituan); Guanglu Wan (Meituan)

07 Jun 2023

Probabilistic linear discriminant analysis (PLDA) is commonly used in speaker verification systems to score the similarity of speaker embeddings. Recent studies improved the performance of PLDA under domain-matched conditions by diagonalizing its covariance. We suspect that such drastic pruning may destroy PLDA's capacity to model correlations between embedding dimensions, leading to inadequate performance under domain adaptation. This paper explores two alternative covariance regularization approaches, interpolated PLDA and sparse PLDA, to tackle this problem. Interpolated PLDA incorporates prior knowledge from cosine scoring to interpolate the covariance of PLDA. Sparse PLDA introduces a sparsity penalty when updating the covariance. Experimental results demonstrate that both approaches noticeably outperform diagonal regularization under domain adaptation. In addition, the amount of in-domain data can be significantly reduced when training sparse PLDA for domain adaptation.
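The two regularizers described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden reading, not the paper's actual formulation: it assumes interpolated PLDA blends the estimated covariance with an identity matrix (the implicit covariance behind cosine scoring) via a weight `alpha`, and that sparse PLDA soft-thresholds off-diagonal covariance entries with a penalty `lam`; the function names and parameters are hypothetical.

```python
import numpy as np

def interpolated_covariance(cov, alpha=0.5):
    """Hypothetical interpolated-PLDA regularizer: blend the estimated
    covariance with the identity matrix, which is the implicit covariance
    under cosine scoring. alpha=1 recovers the original covariance;
    alpha=0 reduces scoring to the cosine-like case."""
    d = cov.shape[0]
    return alpha * cov + (1.0 - alpha) * np.eye(d)

def sparse_covariance(cov, lam=0.1):
    """Hypothetical sparse-PLDA regularizer: soft-threshold the
    off-diagonal entries, shrinking weak dimension correlations to zero
    while leaving per-dimension variances on the diagonal intact.
    Diagonalization is the limiting case of a very large penalty."""
    shrunk = np.sign(cov) * np.maximum(np.abs(cov) - lam, 0.0)
    np.fill_diagonal(shrunk, np.diag(cov))  # keep variances unshrunk
    return shrunk

# Toy 3x3 covariance with one strong and two weak off-diagonal correlations.
cov = np.array([[1.00, 0.05, 0.30],
                [0.05, 1.00, 0.02],
                [0.30, 0.02, 1.00]])
print(interpolated_covariance(cov, alpha=0.5))
print(sparse_covariance(cov, lam=0.1))
```

Note the contrast with diagonalization: both regularizers retain the strong 0.30 correlation (halved by interpolation, shrunk to 0.20 by thresholding) instead of discarding all off-diagonal structure outright.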
