Deep Sensor Fusion Based On Frustum Point Single Shot Multibox Detector For 3D Object Detection
Yu Wang, Ye Zhang, Shaohua Zhai, Hao Chen, Shaoqi Shi, Gang Wang
We present a deep sensor fusion method based on a frustum point single shot multibox detector (PointSSD) for autonomous driving scenarios. The proposed method addresses the precision degradation in frustum PointNets (F-PointNet) caused by its heavy reliance on 2D detection and its insufficient use of RGB information. The method consists of two subnetworks: a pyramid segmentation network (PSNet) and PointSSD. PSNet uses a novel architecture that performs semantic segmentation on RGB images to generate high-quality image semantic information. From this image semantic information, point cloud semantic information is obtained through projection and then fused with raw 3D spatial features by deep fusion. The fused features are processed by PointSSD, which performs classification and bounding box regression. Evaluated on the KITTI dataset, our method outperforms other methods in 3D classification and 3D localization, and it remains robust to 2D false detections.
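As a rough illustration of the projection-and-fusion step described above, the sketch below projects LiDAR points into the image plane using KITTI-style calibration matrices, gathers per-pixel semantic scores from a segmentation output, and concatenates them with the raw 3D coordinates. The function name, argument shapes, and the use of NumPy are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def fuse_point_semantics(points_xyz, semantic_map, P2, R0_rect, Tr_velo_to_cam):
    """Attach per-point image semantic scores to raw LiDAR points (illustrative sketch).

    points_xyz      : (N, 3) LiDAR points in the velodyne frame.
    semantic_map    : (H, W, C) per-pixel class scores from the segmentation network.
    P2, R0_rect, Tr_velo_to_cam : KITTI calibration matrices (3x4, 3x3, 3x4).
    Returns an (M, 3 + C) array of fused features for points that project inside the image.
    """
    N = points_xyz.shape[0]
    pts_h = np.hstack([points_xyz, np.ones((N, 1))])       # homogeneous coordinates (N, 4)

    # velodyne frame -> rectified camera frame
    cam = R0_rect @ (Tr_velo_to_cam @ pts_h.T)              # (3, N)

    # keep only points in front of the camera
    front = cam[2, :] > 0.1
    cam = cam[:, front]
    pts = points_xyz[front]

    # rectified camera frame -> image plane
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])    # (4, N')
    img = P2 @ cam_h                                        # (3, N')
    u = (img[0, :] / img[2, :]).astype(np.int64)
    v = (img[1, :] / img[2, :]).astype(np.int64)

    H, W, _ = semantic_map.shape
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)

    # gather per-pixel semantic scores for each point that lands inside the image
    sem = semantic_map[v[inside], u[inside], :]             # (M, C)

    # concatenate raw 3D coordinates with the projected semantic scores
    return np.concatenate([pts[inside], sem], axis=1)
```

The fused (xyz + semantic score) features produced this way would then be consumed by the point-cloud detection head; the actual deep fusion in the paper operates inside the network rather than as a simple concatenation, so this is only a minimal stand-in for the idea.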