ispc-lab / LiDAR4D

πŸ’« [CVPR 2024] LiDAR4D: Dynamic Neural Fields for Novel Space-time View LiDAR Synthesis

Home Page: https://dyfcalid.github.io/LiDAR4D


LiDAR4D: Dynamic Neural Fields for Novel Space-time View LiDAR Synthesis

Zehan Zheng, Fan Lu, Weiyi Xue, Guang Chen†, Changjun Jiang († Corresponding author)
CVPR 2024

Paper (arXiv) | Paper (CVPR) | Project Page | Video | Poster | Slides

This repository is the official PyTorch implementation for LiDAR4D.

Table of Contents
  1. Changelog
  2. Demo
  3. Introduction
  4. Getting started
  5. Results
  6. Simulation
  7. Citation

Changelog

2024-6-1: πŸ•ΉοΈ We release the simulator for easier rendering and manipulation. Happy Children's Day and have fun!
2024-5-4: πŸ“ˆ We update the flow fields and improve temporal interpolation.
2024-4-13: πŸ“ˆ We update the U-Net of LiDAR4D for better ray-drop refinement.
2024-4-5: πŸš€ Code of LiDAR4D is released.
2024-4-4: πŸ”₯ The preprint paper is now available on arXiv, along with the project page.
2024-2-27: πŸŽ‰ Our paper is accepted by CVPR 2024.

Demo

LiDAR4D_demo.mp4

Introduction

LiDAR4D is a differentiable LiDAR-only framework for novel space-time LiDAR view synthesis, which reconstructs dynamic driving scenarios and generates realistic LiDAR point clouds end-to-end. It adopts 4D hybrid neural representations and motion priors derived from point clouds for geometry-aware and time-consistent large-scale scene reconstruction.

Getting started

πŸ› οΈ Installation

git clone https://github.com/ispc-lab/LiDAR4D.git
cd LiDAR4D

conda create -n lidar4d python=3.9
conda activate lidar4d

# PyTorch
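# (Optional sanity check, not in the original instructions: inspect your local
#  CUDA setup with `nvidia-smi` or `nvcc --version`, then pick the matching wheel below.)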
# CUDA 12.1
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
# CUDA 11.8
# pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
# CUDA <= 11.7
# pip install torch==2.0.0 torchvision torchaudio

# Dependencies
pip install -r requirements.txt

# Local compile for tiny-cuda-nn
git clone --recursive https://github.com/nvlabs/tiny-cuda-nn
cd tiny-cuda-nn/bindings/torch
python setup.py install

# Compile packages in utils
# (first return to the LiDAR4D root if you are still in tiny-cuda-nn/bindings/torch)
cd ../../..
cd utils/chamfer3D
python setup.py install
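Optionally, you can sanity-check the environment before moving on. The two commands below are only a quick import test (they are not part of the official setup); they confirm that the CUDA-enabled PyTorch build and the tiny-cuda-nn bindings are usable:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import tinycudann as tcnn; print('tiny-cuda-nn OK')"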

πŸ“ Dataset

KITTI-360 dataset (Download)

We use sequence 00 (2013_05_28_drive_0000_sync) for the experiments in our paper.

Download the KITTI-360 dataset (2D images are not needed) and put it into data/kitti360
(or use a symlink: ln -s DATA_ROOT/KITTI-360 ./data/kitti360/).
The folder tree is as follows:

data
└── kitti360
    └── KITTI-360
        β”œβ”€β”€ calibration
        β”œβ”€β”€ data_3d_raw
        └── data_poses

Next, run the KITTI-360 dataset preprocessing (set DATASET and SEQ_ID in preprocess_data.sh first):

bash preprocess_data.sh
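For example, to preprocess one of the dynamic sequences used in the paper, the variables at the top of preprocess_data.sh could be set as follows (illustrative values only; check the script for its exact defaults):

# inside preprocess_data.sh (illustrative)
DATASET=kitti360   # dataset to preprocess
SEQ_ID=4950        # target sequence ID, e.g. 2350 / 4950 / 8120 / ...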

After preprocessing, your folder structure should look like this:

configs
β”œβ”€β”€ kitti360_{sequence_id}.txt
data
└── kitti360
    β”œβ”€β”€ KITTI-360
    β”‚   β”œβ”€β”€ calibration
    β”‚   β”œβ”€β”€ data_3d_raw
    β”‚   └── data_poses
    β”œβ”€β”€ train
    β”œβ”€β”€ transforms_{sequence_id}test.json
    β”œβ”€β”€ transforms_{sequence_id}train.json
    └── transforms_{sequence_id}val.json

πŸš€ Run LiDAR4D

Set the corresponding sequence config path in --config, and adjust the logging directory in --workspace if needed. Remember to set an available GPU ID in CUDA_VISIBLE_DEVICES.
Then run the following command:

# KITTI-360
bash run_kitti_lidar4d.sh
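For reference, the script essentially boils down to a single training/evaluation command of roughly the following shape. The entry-point name main_lidar4d.py and the workspace path are assumptions for illustration; --config, --workspace, and CUDA_VISIBLE_DEVICES are the options mentioned above, so check run_kitti_lidar4d.sh for the actual invocation:

# illustrative invocation; see run_kitti_lidar4d.sh for the real one
CUDA_VISIBLE_DEVICES=0 python main_lidar4d.py \
    --config configs/kitti360_4950.txt \
    --workspace log/kitti360_lidar4d_4950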

πŸ“Š Results

KITTI-360 Dynamic Dataset (Sequences: 2350 4950 8120 10200 10750 11400)

| Method | Point Cloud CD↓ | Point Cloud F-Score↑ | Depth RMSE↓ | Depth MedAE↓ | Depth LPIPS↓ | Depth SSIM↑ | Depth PSNR↑ | Intensity RMSE↓ | Intensity MedAE↓ | Intensity LPIPS↓ | Intensity SSIM↑ | Intensity PSNR↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LiDAR-NeRF | 0.1438 | 0.9091 | 4.1753 | 0.0566 | 0.2797 | 0.6568 | 25.9878 | 0.1404 | 0.0443 | 0.3135 | 0.3831 | 17.1549 |
| LiDAR4D (Ours) † | 0.1002 | 0.9320 | 3.0589 | 0.0280 | 0.0689 | 0.8770 | 28.7477 | 0.0995 | 0.0262 | 0.1498 | 0.6561 | 20.0884 |

KITTI-360 Static Dataset (Sequences: 1538 1728 1908 3353)

| Method | Point Cloud CD↓ | Point Cloud F-Score↑ | Depth RMSE↓ | Depth MedAE↓ | Depth LPIPS↓ | Depth SSIM↑ | Depth PSNR↑ | Intensity RMSE↓ | Intensity MedAE↓ | Intensity LPIPS↓ | Intensity SSIM↑ | Intensity PSNR↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LiDAR-NeRF | 0.0923 | 0.9226 | 3.6801 | 0.0667 | 0.3523 | 0.6043 | 26.7663 | 0.1557 | 0.0549 | 0.4212 | 0.2768 | 16.1683 |
| LiDAR4D (Ours) † | 0.0834 | 0.9312 | 2.7413 | 0.0367 | 0.0995 | 0.8484 | 29.3359 | 0.1116 | 0.0335 | 0.1799 | 0.6120 | 19.0619 |

†: The latest results better than the paper.
Experiments are conducted on the NVIDIA 4090 GPU. Results may be subject to some variation and randomness.

πŸ•ΉοΈ Simulation

After reconstruction, you can use the simulator to render and manipulate LiDAR point clouds throughout the whole scenario. It supports dynamic scene re-play, novel LiDAR configurations (--fov_lidar, --H_lidar, --W_lidar), and novel trajectories (--shift_x, --shift_y, --shift_z).
We also provide a simple demo setting that transforms the LiDAR configuration from KITTI-360 to nuScenes via the --kitti2nus flag in the bash script.
Check the sequence config and the corresponding workspace and model checkpoint path (--ckpt), then run the following command:

bash run_kitti_lidar4d_sim.sh

The results will be saved in the workspace folder.
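For illustration, a novel-configuration / novel-trajectory rendering could be launched roughly as follows. The entry-point name main_lidar4d_sim.py, the paths, and the numeric values are assumptions; the flags are the ones listed above, so check run_kitti_lidar4d_sim.sh for the actual invocation:

# illustrative invocation; adjust resolution / FOV / trajectory offsets as needed
CUDA_VISIBLE_DEVICES=0 python main_lidar4d_sim.py \
    --config configs/kitti360_4950.txt \
    --workspace log/kitti360_lidar4d_4950 \
    --ckpt log/kitti360_lidar4d_4950/checkpoints/latest.pth \
    --H_lidar 32 --W_lidar 1080 \
    --shift_z 1.0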

Acknowledgement

We sincerely appreciate the great contributions of the following works:

Citation

If you find our repo or paper helpful, feel free to support us with a star 🌟 or use the following citation:

@inproceedings{zheng2024lidar4d,
  title     = {LiDAR4D: Dynamic Neural Fields for Novel Space-time View LiDAR Synthesis},
  author    = {Zheng, Zehan and Lu, Fan and Xue, Weiyi and Chen, Guang and Jiang, Changjun},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024}
  }

License

All code within this repository is under Apache License 2.0.
