GENERALIZED PSEUDO-LABELING IN CONSISTENCY REGULARIZATION FOR SEMI-SUPERVISED LEARNING
Nikolaos Karaliolios, Florian Chabot, Camille Dupont, Hervé Le Borgne, Quoc-Cuong Pham, Romaric Audigier
Semi-Supervised Learning (SSL) reduces annotation cost by exploiting large amounts of unlabeled data. A popular idea in SSL image classification is Pseudo-Labeling (PL), where the predictions of a network are used to assign a label to an unlabeled image. However, this practice exposes learning to confirmation bias. In this paper we propose Generalized Pseudo-Labeling (GPL), a simple and generic way to exploit negative pseudo-labels in consistency regularization, entailing minimal additional computational overhead and hyperparameter fine-tuning. GPL makes learning more robust by using the information that an image does not belong to a certain class, which is more abundant and reliable than positive label information. We showcase GPL in the context of FixMatch. On the benchmark using only $40$ labels of the CIFAR-$10$ dataset, adding GPL on top of FixMatch improves the error rate from $7.93 \%$ to $6.58 \%$, and on CIFAR-$100$ with $2500$ labels, from $28.02 \%$ to $26.85 \%$.
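To make the negative pseudo-label idea concrete, the following is a minimal NumPy sketch: classes that receive very low probability on the weakly augmented view are taken as negative pseudo-labels, and a loss term penalizes any probability mass the strongly augmented view still places on them. The function names and the threshold `tau_neg` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def negative_pseudo_label_loss(weak_logits, strong_logits, tau_neg=0.05):
    """Sketch of a negative pseudo-label consistency term.

    Classes whose predicted probability on the weakly augmented view
    falls below `tau_neg` are treated as negative pseudo-labels; the
    loss is -log(1 - p) on the strongly augmented view's probability
    for those classes, pushing that probability toward zero.
    """
    p_weak = softmax(weak_logits)      # predictions on weak view
    p_strong = softmax(strong_logits)  # predictions on strong view
    neg_mask = (p_weak < tau_neg).astype(float)  # negative pseudo-labels
    eps = 1e-7  # avoid log(0)
    loss = -(neg_mask * np.log(1.0 - p_strong + eps)).sum(axis=-1)
    return loss.mean()

# Toy example: the weak view is confident about class 0, so classes 1
# and 2 become negative pseudo-labels for the strong view.
weak = np.array([[5.0, 0.0, 0.0]])
consistent_strong = np.array([[5.0, 0.0, 0.0]])
violating_strong = np.array([[0.0, 5.0, 0.0]])  # mass on a negative class
print(negative_pseudo_label_loss(weak, consistent_strong))
print(negative_pseudo_label_loss(weak, violating_strong))
```

A strong view that concentrates mass on a negative pseudo-label incurs a much larger loss than one consistent with the weak view, which is the signal GPL adds on top of FixMatch's positive pseudo-labels.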