Audio Deepfake Detection System with Neural Stitching for ADD 2022
Rui Yan, Cheng Wen, Shuran Zhou, Tingwei Guo, Wei Zou, Xiangang Li
SPS
This paper describes our best system and methodology for ADD 2022: the First Audio Deep Synthesis Detection Challenge. The same system, trained with a similar methodology, was used in both rounds of evaluation in Track 3.2. The first-round Track 3.2 data is generated by text-to-speech (TTS) or voice conversion (VC) algorithms, while the second-round data consists of fake audio generated by other participants in Track 3.1 with the aim of spoofing our systems. Our system uses a standard 34-layer ResNet with multi-head attention pooling to learn discriminative embeddings for fake-audio and spoof detection. We further apply neural stitching to boost the model's generalization capability so that it performs equally well across tasks; more details are given in the following sections. Experiments show that our proposed method outperforms all other systems, achieving a 10.1% equal error rate (EER) in Track 3.2.
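To make the pooling step concrete, the sketch below shows one common form of multi-head attentive pooling: the frame-level feature dimension is split across heads, each head softmax-weights the frames with its own scoring vector, and the per-head weighted means are concatenated into a single utterance-level embedding. This is an illustrative reconstruction, not the authors' exact implementation; the head count, feature dimension, and random initialization of the scoring vectors are assumptions (in the real system the scoring vectors would be trained jointly with the ResNet backbone).

```python
import numpy as np

rng = np.random.default_rng(0)

def multi_head_attentive_pooling(frames: np.ndarray, num_heads: int = 4) -> np.ndarray:
    """Pool frame-level features of shape (T, D) into one (D,) embedding.

    Hypothetical sketch: scoring vectors are randomly initialized here
    for illustration; a trained system would learn them end to end.
    """
    T, D = frames.shape
    assert D % num_heads == 0, "feature dim must divide evenly across heads"
    d = D // num_heads
    # Split the feature dimension across heads: (num_heads, T, d)
    heads = frames.reshape(T, num_heads, d).transpose(1, 0, 2)
    # One scoring vector per head (assumed random init for this sketch)
    score_vecs = rng.standard_normal((num_heads, d))
    pooled = []
    for h in range(num_heads):
        scores = heads[h] @ score_vecs[h]        # (T,) attention logits
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()                     # softmax over time frames
        pooled.append(alpha @ heads[h])          # weighted mean, shape (d,)
    return np.concatenate(pooled)                # concatenated heads, (D,)

# Example: 200 frames of 128-dim backbone features -> one 128-dim embedding
emb = multi_head_attentive_pooling(rng.standard_normal((200, 128)), num_heads=4)
print(emb.shape)  # (128,)
```

Compared with plain temporal average pooling, the attention weights let each head emphasize the frames most indicative of synthesis artifacts, which is why attentive pooling is widely used for utterance-level anti-spoofing embeddings.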