A GENERATIVE SELF-ENSEMBLE APPROACH TO SIMULATED+UNSUPERVISED LEARNING
Yu Mitsuzumi, Go Irie, Akisato Kimura, Atsushi Nakazawa
SPS
In this paper, we consider Simulated and Unsupervised (S+U) learning, the problem of learning from labeled synthetic and unlabeled real images. After translating the synthetic images into real ones, existing S+U learning methods train a predictor (e.g., a regression function) only on the labeled synthetic images and ignore the target real images, which can result in unsatisfactory prediction performance. Our approach uses both synthetic and real images to train the predictor. Our main idea is to incorporate a self-ensemble learning framework into S+U learning. More specifically, we require the prediction results for an unlabeled real image to be consistent between "teacher" and "student" predictors, even after perturbations are added to the image. Furthermore, aiming to generate diverse perturbations along the underlying data manifold, we introduce one-to-many image translation between synthetic and real images. Evaluation experiments on an appearance-based gaze estimation task demonstrate that the proposed ideas improve prediction accuracy and that our full method outperforms existing S+U learning methods.
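The teacher-student consistency described above can be sketched in a few lines. This is a minimal, illustrative example only, assuming a Mean Teacher-style setup in which the teacher's weights are an exponential moving average (EMA) of the student's and the consistency penalty is a mean squared difference between their predictions on perturbed views of an unlabeled real image; the function names (`ema_update`, `consistency_loss`) and the toy linear predictor are our own illustrations, not the paper's implementation.

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    # Teacher weights track an exponential moving average of the student's
    # weights (a common choice in self-ensemble / Mean Teacher setups).
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher_w, student_w)]

def consistency_loss(teacher_pred, student_pred):
    # Mean squared difference between teacher and student predictions;
    # minimizing this enforces consistent outputs on unlabeled real images.
    return float(np.mean((np.asarray(teacher_pred) - np.asarray(student_pred)) ** 2))

# Toy linear predictors standing in for the gaze regressors.
rng = np.random.default_rng(0)
w_student = [np.array([0.5, -0.2])]
w_teacher = [np.array([0.4, -0.1])]

x = np.array([1.0, 2.0])                     # unlabeled "real" input
x_pert = x + 0.01 * rng.standard_normal(2)   # perturbed view of the same input

pred_teacher = w_teacher[0] @ x
pred_student = w_student[0] @ x_pert
loss = consistency_loss(pred_teacher, pred_student)

# After a student gradient step (omitted here), the teacher is refreshed.
w_teacher = ema_update(w_teacher, w_student, alpha=0.9)
```

In the full method, the perturbed views would come from the one-to-many synthetic-to-real translation rather than additive noise, so that perturbations lie along the underlying data manifold.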