This is the official code for the paper "Translating Images into Maps", presented at ICRA 2022.
The code was written in Python 3.7. The following libraries are the minimum required for this repo (an example install command follows the list):
pytorch
cv2
numpy
pickle
pyquaternion
shapely
lmdb
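A minimal sketch of installing these with pip (note that PyPI package names differ slightly from the import names: pytorch installs as torch, and cv2 is provided by opencv-python; pickle ships with the Python standard library and needs no install):

pip install torch opencv-python numpy pyquaternion shapely lmdb

The repo may pin specific versions, so check any requirements file before relying on this command.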
The official nuScenes dataset is required to train the full model, but for convenience we provide the nuScenes mini dataset wrapped into LMDBs:
https://drive.google.com/drive/folders/1-1dZXeHnPiuqX-w8ruJHqfxBuMYMONRT?usp=sharing
Unzip the contents of this folder and place them in a nuscenes_data folder, created as follows:
cd translating-images-into-maps
mkdir nuscenes_data
This folder contains the pre-generated ground-truth maps for the mini dataset, along with the input images and camera intrinsics.
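To sanity-check the download, here is a minimal Python sketch for inspecting one of the LMDBs. The path is hypothetical, and the assumption that values are pickled is ours (pickle is a listed dependency); the actual key/value layout is defined by this repo's dataloader:

import lmdb
import pickle

# Hypothetical path -- point this at one of the unzipped LMDB directories.
env = lmdb.open("nuscenes_data/samples", readonly=True, lock=False)
with env.begin() as txn:
    for key, value in txn.cursor():
        print("key:", key)
        # Assumption: values are pickled objects.
        sample = pickle.loads(value)
        print("value type:", type(sample))
        break  # inspect only the first record
env.close()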
To train a model with the configuration used in the paper, run:
python train.py
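If you intend to train on a GPU, a quick generic check (not specific to this repo) that PyTorch can see your CUDA device:

python -c "import torch; print(torch.cuda.is_available())"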
If you find this code useful, please cite the following papers:
@inproceedings{saha2022translating,
  title={Translating Images into Maps},
  author={Saha, Avishkar and Mendez, Oscar and Russell, Chris and Bowden, Richard},
  booktitle={2022 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2022},
  organization={IEEE}
}

@inproceedings{saha2021enabling,
  title={Enabling spatio-temporal aggregation in birds-eye-view vehicle estimation},
  author={Saha, Avishkar and Mendez, Oscar and Russell, Chris and Bowden, Richard},
  booktitle={2021 IEEE International Conference on Robotics and Automation (ICRA)},
  pages={5133--5139},
  year={2021},
  organization={IEEE}
}