BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation

Zhijian Liu*, Haotian Tang*, Alexander Amini, Xinyu Yang, Huizi Mao, Daniela L. Rus, Song Han
Massachusetts Institute of Technology
(* indicates equal contribution)


Abstract

Multi-sensor fusion is essential for an accurate and reliable autonomous driving system. Recent approaches are based on point-level fusion: augmenting the LiDAR point cloud with camera features. However, the camera-to-LiDAR projection throws away the semantic density of camera features, hindering the effectiveness of such methods, especially for semantic-oriented tasks (such as 3D scene segmentation). In this paper, we break this deeply-rooted convention with BEVFusion, an efficient and generic multi-task multi-sensor fusion framework. It unifies multi-modal features in the shared bird's-eye view (BEV) representation space, which nicely preserves both geometric and semantic information. To achieve this, we diagnose and lift key efficiency bottlenecks in the view transformation with optimized BEV pooling, reducing latency by more than 40x. BEVFusion is fundamentally task-agnostic and seamlessly supports different 3D perception tasks with almost no architectural changes. It establishes the new state of the art on nuScenes, achieving 1.3% higher mAP and NDS on 3D object detection and 13.6% higher mIoU on BEV map segmentation, with 1.9x lower computation cost.
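The core idea in the abstract can be illustrated concretely: camera features are lifted into a shared BEV grid, LiDAR features are flattened into the same grid, and the two maps are fused there. The sketch below is a minimal, hypothetical illustration of that flow, not the paper's optimized BEV pooling; the names (camera_to_bev, ConvFuser), the mean-pooling view transform, and all grid sizes and channel counts are illustrative assumptions.

# Minimal sketch of BEV-space sensor fusion (illustrative, not the authors' code).
# Assumes camera features have already been lifted to 3D points (LSS-style) and
# LiDAR features have already been voxelized into a BEV map.
import torch
import torch.nn as nn

def camera_to_bev(cam_feats, bev_xy, grid=(128, 128)):
    """Pool per-point camera features into a BEV grid by averaging.

    cam_feats: (N, C) features of 3D points lifted from the images.
    bev_xy:    (N, 2) integer BEV cell indices (x, y) of those points.
    """
    H, W = grid
    C = cam_feats.shape[1]
    flat_idx = bev_xy[:, 1] * W + bev_xy[:, 0]            # (N,) flattened cell index
    bev = torch.zeros(H * W, C)
    count = torch.zeros(H * W, 1)
    bev.index_add_(0, flat_idx, cam_feats)                # sum features per cell
    count.index_add_(0, flat_idx, torch.ones(len(flat_idx), 1))
    bev = bev / count.clamp(min=1)                        # mean-pool per cell
    return bev.view(H, W, C).permute(2, 0, 1)             # (C, H, W)

class ConvFuser(nn.Module):
    """Fuse camera-BEV and LiDAR-BEV maps by concatenation + convolution."""

    def __init__(self, cam_ch=80, lidar_ch=256, out_ch=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_ch + lidar_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam_bev, lidar_bev):
        return self.fuse(torch.cat([cam_bev, lidar_bev], dim=1))

# Toy usage: 10k lifted camera points with 80-d features, 256-channel LiDAR BEV.
cam_feats = torch.randn(10_000, 80)
bev_xy = torch.randint(0, 128, (10_000, 2))
cam_bev = camera_to_bev(cam_feats, bev_xy).unsqueeze(0)   # (1, 80, 128, 128)
lidar_bev = torch.randn(1, 256, 128, 128)
fused = ConvFuser()(cam_bev, lidar_bev)                   # (1, 256, 128, 128)

Task-specific heads (detection, segmentation) would then operate on the fused BEV map, which is what makes the representation task-agnostic in the sense described above.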

Citation

@inproceedings{liu2022bevfusion,
  title={BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation},
  author={Liu, Zhijian and Tang, Haotian and Amini, Alexander and Yang, Xinyu and Mao, Huizi and Rus, Daniela and Han, Song},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  year={2023}
}


Acknowledgment

We would like to thank Xuanyao Chen and Brady Zhou for their guidance on detection and segmentation evaluation, and Yingfei Liu and Tiancai Wang for their helpful discussions. This work was supported by the National Science Foundation, Hyundai Motor, Qualcomm, NVIDIA, and Apple. Zhijian Liu was partially supported by the Qualcomm Innovation Fellowship.
