Towards Neural AR: Unsupervised Object Segmentation with 3D Scanned Model Through ReLaTIVE
Zackary Sin, Peter Ng, Hong Va Leong
Neural AR augments reality with a neural model, enabling direct manipulation of the real scene. As in typical AR, this requires tracking the target object to be augmented, yet relatively little work addresses tracking in the Neural AR setting. Since most computer vision models operate at the image level, effectively tracking a target object for manipulation manifests as a segmentation problem. To the best of our knowledge, no existing model performs unsupervised object segmentation from a 3D scanned model, which would benefit Neural AR by reducing labor-intensive data annotation. In this paper, we propose a CycleGAN-inspired model, Realistic Layered Training Image from Virtual Environment (ReLaTIVE), which only requires the user to 3D scan the target object, as in typical AR. The ReLaTIVE generator then outputs the object's mask for Neural AR. Without any annotated segmentation masks, ReLaTIVE generates training samples from the 3D scanned model to learn to separate the foreground from the background.
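As a rough illustration of the layered training-sample idea (a minimal sketch under our own assumptions, not the authors' implementation), one way such annotation-free pairs could be synthesized is to composite a rendering of the 3D-scanned object over an arbitrary background and reuse the rendering's alpha channel as the foreground mask. The function name and file paths below are hypothetical.

```python
# Hypothetical sketch: build a (composited image, foreground mask) training pair
# by layering a rendering of the 3D-scanned object over a background photo.
import numpy as np
from PIL import Image


def composite_training_sample(object_rgba_path, background_rgb_path, size=(256, 256)):
    """Return (composited RGB image, binary foreground mask) as numpy arrays."""
    obj = Image.open(object_rgba_path).convert("RGBA").resize(size)
    bg = Image.open(background_rgb_path).convert("RGBA").resize(size)

    # Layer the scanned object's rendering over the background via its alpha channel.
    composite = Image.alpha_composite(bg, obj).convert("RGB")

    # The rendering's alpha channel doubles as a free segmentation label,
    # so no manual mask annotation is needed.
    alpha = np.asarray(obj)[..., 3]
    mask = (alpha > 127).astype(np.uint8)

    return np.asarray(composite), mask


# Example usage (placeholder paths):
# image, mask = composite_training_sample("scan_render.png", "random_background.jpg")
```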