jxu21 / Birds-eye-view-Perception

Awesome BEV perception research and cookbook for audiences at all levels in autonomous driving

Home Page: https://arxiv.org/abs/2209.05324

Bird's-eye-view (BEV) Perception: A Survey and Collection

Awesome BEV perception papers and toolbox for achieving state-of-the-art performance.

Introduction

This repo is associated with the survey paper "Delving into the Devils of Bird's-eye-view Perception: A Review, Evaluation and Recipe", which provides an up-to-date literature survey of BEV perception and an open-source BEV toolbox based on PyTorch. We also introduce the BEV algorithm family, including follow-up work on BEV perception such as GAPretrain and FocalDistiller (papers coming soon). We hope this repo can serve not only as a good starting point for newcomers but also as a reference for current researchers in the BEV perception community.

If you find work that deserves to be cited below, shoot us an email or simply open a PR!

Major Features

  • SOTA BEV Algorithm Family
    We include important follow-up works of BEVFormer/BEVDet/BEVDepth in different aspects, ranging from plug-and-play tricks (FocalDistiller, paper coming soon) to pre-training distillation (GAPretrain). More details on each paper can be found in its README.md file here.
  • Convenient BEVPerception Toolbox
    We integrate a bag of tricks in the BEV toolbox that helped us take 1st place in the camera-based detection track of the Waymo Open Challenge 2022; the toolbox can be used independently or as a plug-in for mmdet3d and detectron2. Moreover, we provide a suitable playground for newcomers to this area, including a hands-on tutorial and a small-scale dataset (1/5 of WOD in KITTI format) to validate ideas. More details can be found here.
    Bag of Tricks
    • Multiple View Data Augmentation: TBA
    • BEV Encoder: TBA
    • Loss & Heads Family: TBA
    • Post-Process:
      • Test-time Augmentation
      • Weighted Box Fusion
      • Two-stage Ensemble
  • Up-to-date Literature Survey for BEV Perception
    We summarize important methods on BEV perception from recent years, covering different modalities (camera, LiDAR, fusion) and tasks (detection, segmentation, occupancy). More details on the survey paper list can be found here.
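
As a concrete illustration of one of the post-process tricks above, here is a minimal sketch of weighted box fusion for axis-aligned 2D boxes. It follows the general WBF idea (cluster overlapping predictions, average coordinates weighted by confidence); the toolbox's actual implementation, box format, and thresholds may differ.

```python
import numpy as np

def iou(a, b):
    # Axis-aligned IoU between boxes in [x1, y1, x2, y2] format.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def weighted_box_fusion(boxes, scores, iou_thr=0.55):
    """Fuse overlapping predictions (e.g. from several models).

    boxes: (N, 4) array; scores: (N,) array. A box joins an existing
    cluster when its IoU with the cluster's fused box exceeds iou_thr;
    the fused box is the confidence-weighted average of its members,
    and the fused score is the cluster's mean score.
    """
    order = np.argsort(scores)[::-1]      # process high-confidence boxes first
    clusters = []                          # each cluster: list of (box, score)
    for i in order:
        b, s = boxes[i], scores[i]
        for c in clusters:
            fused = np.average([x for x, _ in c], axis=0,
                               weights=[w for _, w in c])
            if iou(fused, b) > iou_thr:
                c.append((b, s))
                break
        else:
            clusters.append([(b, s)])
    fused_boxes = np.array([np.average([x for x, _ in c], axis=0,
                                       weights=[w for _, w in c])
                            for c in clusters])
    fused_scores = np.array([np.mean([w for _, w in c]) for c in clusters])
    return fused_boxes, fused_scores
```

Unlike NMS, which discards all but the highest-scoring box in a cluster, this uses every overlapping prediction, which is why it helps when ensembling multiple models.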

What's New

[2023/04/06]: Two new papers, GAPretrain and FocalDistiller, are coming soon with official implementations.

[2022/10/13]: v0.1 was released.

  • Integrate several practical data augmentation methods for camera-based BEV 3D detection in the toolbox.
  • Offer a pipeline to process the Waymo dataset (camera-based 3D detection).
  • Release a baseline (with config) for the Waymo dataset, along with 1/5 of the Waymo dataset in KITTI format.
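
One common data augmentation trick in camera-based BEV 3D detection is image-view resizing with a matching update of the camera intrinsics, so that 2D-to-BEV lifting stays geometrically consistent. The sketch below is a dependency-free illustration of the idea, not the toolbox's actual implementation (which may use a proper image resampler and richer augmentations):

```python
import numpy as np

def resize_with_intrinsics(image, K, scale):
    """Resize a camera image and update the 3x3 intrinsics to match.

    Scaling pixel coordinates by `scale` scales the focal lengths and the
    principal point by the same factor, so 3D-to-2D projection with the
    new intrinsics agrees with the resized image.
    """
    h, w = image.shape[:2]
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resize via index sampling (keeps the sketch simple).
    ys = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = image[ys][:, xs]
    K_new = K.copy()
    K_new[:2] *= scale   # fx, fy (row 0/1) and cx, cy scale with the image
    return resized, K_new
```

Because the 3D boxes themselves are untouched, this augmentation varies the image resolution the network sees without perturbing the supervision in BEV space.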

Please refer to changelog.md for details and release history of the toolbox code.

BEV Algorithm Family

The BEV algorithm family includes follow-up works of BEVFormer in different aspects, ranging from plug-and-play tricks to pre-training distillation. All paper summaries are under nuscenes_playground along with the official implementations; check it out!

BEV Toolbox

The BEV toolbox provides useful recipes for camera-based BEV 3D object detection, including solid data augmentation strategies, efficient BEV encoder designs, a loss function family, useful test-time augmentation, ensemble policies, and so on. Please refer to bev_toolbox/README.md for more details.
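
As a small example of test-time augmentation for detection, one can run the detector on an image and its horizontal mirror, map the mirrored boxes back to the original frame, and pool both prediction sets (e.g. before NMS or box fusion). The `detector` callable and the `[x1, y1, x2, y2]` box format below are illustrative assumptions, not the toolbox API:

```python
import numpy as np

def flip_boxes_back(boxes, img_w):
    # Map boxes predicted on a horizontally flipped image of width img_w
    # back to the original frame: x -> img_w - x, swapping x1 and x2.
    out = boxes.copy()
    out[:, 0] = img_w - boxes[:, 2]
    out[:, 2] = img_w - boxes[:, 0]
    return out

def tta_flip(detector, image):
    """Pool detections from an image and its horizontal mirror.

    `detector` is any callable returning (boxes (N, 4), scores (N,));
    the pooled predictions would typically be merged afterwards,
    e.g. with NMS or weighted box fusion.
    """
    boxes_a, scores_a = detector(image)
    boxes_b, scores_b = detector(image[:, ::-1])   # mirrored view
    boxes_b = flip_boxes_back(boxes_b, image.shape[1])
    return (np.concatenate([boxes_a, boxes_b]),
            np.concatenate([scores_a, scores_b]))
```

The same flip-and-map-back pattern extends to 3D/BEV boxes, where mirroring the image must be paired with mirroring the box parameterization (center, yaw) about the corresponding axis.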

Literature Survey

The general picture of BEV perception at a glance consists of three sub-parts based on the input modality. BEV perception is a general task built on top of a series of fundamental tasks. For completeness of the whole picture of perception algorithms in autonomous driving, we list other topics as well. More details can be found in the survey paper.

We have summarized important datasets and methods on BEV perception from recent years in academia, as well as different roadmaps used in industry.

We have also summarized some conventional methods for different tasks.

License and Citation

This project is released under the Apache 2.0 license.

If you find this project useful in your research, please consider citing:

@article{li2022bevsurvey,
  title={Delving into the Devils of Bird's-eye-view Perception: A Review, Evaluation and Recipe},
  author={Li, Hongyang and Sima, Chonghao and Dai, Jifeng and Wang, Wenhai and Lu, Lewei and Wang, Huijie and Xie, Enze and Li, Zhiqi and Deng, Hanming and Tian, Hao and Zhu, Xizhou and Chen, Li and Gao, Yulu and Geng, Xiangwei and Zeng, Jia and Li, Yang and Yang, Jiazhi and Jia, Xiaosong and Yu, Bohan and Qiao, Yu and Lin, Dahua and Liu, Si and Yan, Junchi and Shi, Jianping and Luo, Ping},
  journal={arXiv preprint arXiv:2209.05324},
  year={2022}
}
@misc{bevtoolbox2022,
  title={{BEVPerceptionx-Survey-Recipe} toolbox for general BEV perception},
  author={BEV-Toolbox Contributors},
  howpublished={\url{https://github.com/OpenPerceptionX/BEVPerception-Survey-Recipe}},
  year={2022}
}
