LiDAR4D: Dynamic Neural Fields for Novel Space-time View LiDAR Synthesis
Zehan Zheng, Fan Lu, Weiyi Xue, Guang Chen†, Changjun Jiang († Corresponding author)
CVPR 2024
This repository is the official PyTorch implementation for LiDAR4D.
2024-4-13: We updated the U-Net of LiDAR4D for better ray-drop refinement.
2024-4-5: The code of LiDAR4D is released.
2024-4-4: The preprint paper is available on arXiv, along with the project page.
2024-2-27: Our paper is accepted by CVPR 2024.
LiDAR4D is a differentiable LiDAR-only framework for novel space-time LiDAR view synthesis, which reconstructs dynamic driving scenarios and generates realistic LiDAR point clouds end-to-end. It adopts 4D hybrid neural representations and motion priors derived from point clouds for geometry-aware and time-consistent large-scale scene reconstruction.
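To make the idea of a 4D neural representation concrete, here is a minimal, purely illustrative sketch (not the authors' code, and much simpler than their hybrid planar/hash representation): a space-time query (x, y, z, t) is lifted by a sinusoidal positional encoding and mapped by a small MLP to per-point LiDAR attributes such as intensity and a ray-drop logit. All names and sizes below are assumptions for illustration.

```python
import numpy as np

def positional_encoding(p, num_freqs=4):
    """Encode each coordinate of p (..., 4) with sin/cos at growing frequencies."""
    freqs = 2.0 ** np.arange(num_freqs)              # (F,)
    angles = p[..., None] * freqs                    # (..., 4, F)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*p.shape[:-1], -1)            # (..., 4 * 2F)

# A tiny random MLP standing in for the learned field.
rng = np.random.default_rng(0)
dim_in = 4 * 2 * 4                                   # 4 coords, sin+cos, 4 freqs
W1 = rng.normal(size=(dim_in, 32))
W2 = rng.normal(size=(32, 2))                        # -> (intensity, ray-drop logit)

def query_field(xyzt):
    """Map space-time queries (N, 4) to per-point attributes (N, 2)."""
    h = np.maximum(positional_encoding(xyzt) @ W1, 0.0)  # ReLU hidden layer
    return h @ W2

queries = rng.uniform(size=(8, 4))                   # 8 space-time samples
out = query_field(queries)
print(out.shape)                                     # (8, 2)
```

In the actual method, rendering a novel LiDAR view amounts to casting rays from a new sensor pose at a new time and aggregating such field queries along each ray.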
git clone https://github.com/ispc-lab/LiDAR4D.git
cd LiDAR4D
conda create -n lidar4d python=3.9
conda activate lidar4d
# PyTorch
# CUDA 12.1
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
# CUDA 11.8
# pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
# CUDA <= 11.7
# pip install torch==2.0.0 torchvision torchaudio
# Dependencies
pip install -r requirements.txt
# Local compile for tiny-cuda-nn
git clone --recursive https://github.com/nvlabs/tiny-cuda-nn
cd tiny-cuda-nn/bindings/torch
python setup.py install
# compile packages in utils
cd utils/chamfer3D
python setup.py install
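The chamfer3D package built above provides a fast CUDA Chamfer distance for evaluating synthesized point clouds. As a reference for what that metric computes, here is a simple NumPy version (illustrative only; it materializes the full O(N×M) pairwise distance matrix, so it does not scale to full scans):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric squared Chamfer distance between point sets p (N, 3) and q (M, 3)."""
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise sq. dists
    # Mean nearest-neighbor distance in each direction, then summed.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

p = np.zeros((4, 3))
q = np.ones((5, 3))
print(chamfer_distance(p, q))  # 3.0 + 3.0 = 6.0
```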
KITTI-360 dataset (Download)
We use sequence 00 (2013_05_28_drive_0000_sync) for the experiments in our paper.
Download the KITTI-360 dataset (2D images are not needed) and put it into data/kitti360 (or use a symlink: ln -s DATA_ROOT/KITTI-360 ./data/kitti360/).
The folder tree is as follows:
data
└── kitti360
    └── KITTI-360
        ├── calibration
        ├── data_3d_raw
        └── data_poses
Next, run the KITTI-360 dataset preprocessing (set DATASET and SEQ_ID):
bash preprocess_data.sh
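LiDAR novel view synthesis pipelines of this kind typically project each 360° scan into a 2D range image indexed by azimuth and elevation during preprocessing. The actual logic lives in preprocess_data.sh and the scripts it calls; the sketch below only illustrates the idea, and the resolution and vertical field-of-view values are hypothetical placeholders, not the repo's settings.

```python
import numpy as np

def to_range_image(points, H=66, W=1030, fov_up=2.0, fov_down=-24.9):
    """Project a point cloud (N, 3) into an H x W range image (illustrative)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                                   # [-pi, pi]
    elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-8)))   # beam angle
    u = ((azimuth / np.pi + 1.0) / 2.0 * W).astype(int) % W      # column index
    v = ((fov_up - elevation) / (fov_up - fov_down) * H).astype(int)
    v = np.clip(v, 0, H - 1)                                     # row index
    img = np.zeros((H, W))
    img[v, u] = r                                                # last hit per pixel
    return img

pts = np.random.default_rng(0).normal(size=(1000, 3))
print(to_range_image(pts).shape)  # (66, 1030)
```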
After preprocessing, your folder structure should look like this:
configs
└── kitti360_{sequence_id}.txt
data
└── kitti360
    ├── KITTI-360
    │   ├── calibration
    │   ├── data_3d_raw
    │   └── data_poses
    └── train
        ├── transforms_{sequence_id}test.json
        ├── transforms_{sequence_id}train.json
        └── transforms_{sequence_id}val.json
Set the corresponding sequence config path in --config, and optionally modify the logging path in --workspace. Remember to set an available GPU ID in CUDA_VISIBLE_DEVICES.
Run the following command:
# KITTI-360
bash run_kitti_lidar4d.sh
We sincerely appreciate the great contributions of the works that our project builds upon.
Please use the following citation if you find our repository or paper helpful:
@inproceedings{zheng2024lidar4d,
title = {LiDAR4D: Dynamic Neural Fields for Novel Space-time View LiDAR Synthesis},
author = {Zheng, Zehan and Lu, Fan and Xue, Weiyi and Chen, Guang and Jiang, Changjun},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2024}
}
All code within this repository is under the Apache License 2.0.