Welcome to the devkit of the nuScenes dataset.
- Changelog
- Dataset download
- Devkit setup
- Tutorial
- Frequently asked questions
- Object detection task
- Citation
## Changelog
- Apr. 30, 2019: Devkit v1.0.1: loosened pip requirements, refined the detection challenge, added a script to export 2d annotations.
- Mar. 26, 2019: Full dataset, paper, & devkit v1.0.0 released. Support dropped for teaser data.
- Dec. 20, 2018: Initial evaluation code released. Devkit folders restructured, which breaks backward compatibility.
- Nov. 21, 2018: RADAR filtering and multi-sweep aggregation.
- Oct. 4, 2018: Code to parse RADAR data released.
- Sep. 12, 2018: Devkit for teaser dataset released.
## Dataset download
To download nuScenes, go to the Download page,
create an account and agree to the nuScenes Terms of Use.
After logging in you will see multiple archives.
For the devkit to work you will need to download all archives.
Please unpack the archives to the /data/sets/nuscenes
folder *without* overwriting folders that occur in multiple archives.
Eventually you should have the following folder structure:
    /data/sets/nuscenes
        samples  - Sensor data for keyframes.
        sweeps   - Sensor data for intermediate frames.
        maps     - Large image files (~500 Gigapixel) that depict the drivable surface and sidewalks in the scene.
        v1.0-*   - JSON tables that include all the metadata and annotations. Each split (trainval, test, mini) is provided in a separate folder.
If you want to use another folder, specify the dataroot
parameter of the NuScenes class (see the tutorial), as in the sketch below.
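A minimal sketch (the dataroot path here is a placeholder for your own folder; v1.0-mini is used because it is the smallest split):

```python
from nuscenes.nuscenes import NuScenes

# dataroot points the devkit at the folder where the archives were unpacked.
# Replace the placeholder path with your own folder; /data/sets/nuscenes is the default.
nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes-custom', verbose=True)
```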
## Devkit setup
The devkit is tested for Python 3.6 and Python 3.7. To install Python, please check here.
Our devkit is available and can be installed via pip:

    pip install nuscenes-devkit
For an advanced installation, see the installation instructions.
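To verify the installation before downloading any data, a quick import check is enough, since importing the package does not require the dataset:

```python
# Sanity-check the pip installation; no dataset is needed for this step.
import nuscenes
print(nuscenes.__file__)  # shows where pip placed the package
```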
## Tutorial
To get started with the nuScenes devkit, run the tutorial as a Jupyter notebook:

    jupyter notebook $HOME/nuscenes-devkit/python-sdk/tutorial.ipynb
If you want to avoid downloading and setting up the data, you can also take a look at the rendered notebook on nuScenes.org. To learn more about the dataset, go to nuScenes.org or take a look at the database schema and annotator instructions. The nuScenes paper provides a detailed analysis of the dataset.
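For a quick taste of what the tutorial covers, the sketch below mirrors its first steps (method names as in devkit v1.0.x; the notebook remains the authoritative walkthrough):

```python
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=True)

# Every scene is a short drive; samples are its annotated keyframes.
nusc.list_scenes()
scene = nusc.scene[0]
sample = nusc.get('sample', scene['first_sample_token'])

# Each keyframe links to sensor data and annotations via tokens.
print(list(sample['data'].keys()))  # e.g. CAM_FRONT, LIDAR_TOP, RADAR_FRONT, ...
nusc.render_sample_data(sample['data']['CAM_FRONT'])
```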
## Frequently asked questions
See the FAQs.
## Object detection task
For instructions related to the object detection task (results format, classes and evaluation metrics), please refer to this readme.
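As a rough orientation only (the linked readme is authoritative, and all field names below should be checked against it), a detection submission is a single JSON file that maps each sample token to a list of predicted boxes:

```python
import json

# Hedged sketch of a detection submission; values and the token are placeholders,
# and the field list should be verified against the detection readme.
submission = {
    'meta': {'use_camera': False, 'use_lidar': True, 'use_radar': False,
             'use_map': False, 'use_external': False},
    'results': {
        '<sample_token>': [{
            'sample_token': '<sample_token>',
            'translation': [974.2, 1714.6, -0.3],  # box center (x, y, z) in meters
            'size': [1.9, 4.5, 1.7],               # width, length, height in meters
            'rotation': [0.7, 0.0, 0.0, 0.7],      # orientation quaternion (w, x, y, z)
            'velocity': [0.0, 1.3],                # (vx, vy) in m/s
            'detection_name': 'car',
            'detection_score': 0.85,
            'attribute_name': 'vehicle.moving',
        }],
    },
}
with open('results.json', 'w') as f:
    json.dump(submission, f)
```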
## Citation
Please use the following citation when referencing nuScenes:
    @article{nuscenes2019,
      title={nuScenes: A multimodal dataset for autonomous driving},
      author={Holger Caesar and Varun Bankiti and Alex H. Lang and Sourabh Vora and
              Venice Erin Liong and Qiang Xu and Anush Krishnan and Yu Pan and
              Giancarlo Baldan and Oscar Beijbom},
      journal={arXiv preprint arXiv:1903.11027},
      year={2019}
    }