
Re-ReND: Real-time Rendering of NeRFs across Devices [ICCV 2023]

Sara Rojas1, Jesus Zarzar1, Juan C. Pérez1, Artsiom Sanakoyeu2, Ali Thabet2, Albert Pumarola2, Bernard Ghanem1

KAUST1, Meta Research2

[TL;DR] We propose Re-ReND for efficient real-time rendering of pre-trained Neural Radiance Fields (NeRFs) on resource-limited devices. Re-ReND achieves this by distilling the NeRF representation into a mesh of learned densities and a set of matrices representing the learned light field, which can be queried using inexpensive matrix multiplications.
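As a rough illustration of this idea, the snippet below sketches how such a factorized light-field query reduces to a few dot products; all names, shapes, and the sigmoid activation here are illustrative assumptions, not the repository's actual API.

import numpy as np

# Hypothetical sketch: a pixel's view-dependent color is recovered from
# per-surface-point embeddings (the U, V, W textures) and a view-direction
# embedding (B) with a few cheap dot products instead of a full NeRF MLP query.
emb_dim = 32                      # illustrative embedding size
u = np.random.rand(emb_dim)       # position embedding for the red channel
v = np.random.rand(emb_dim)       # position embedding for the green channel
w = np.random.rand(emb_dim)       # position embedding for the blue channel
b = np.random.rand(emb_dim)       # embedding of the viewing direction

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rgb = sigmoid(np.array([u @ b, v @ b, w @ b]))   # one dot product per channel
print(rgb)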

This repository contains the official implementation of Re-ReND for rendering NeRFs in real time on devices such as AR/VR headsets, mobile phones, and tablets.


Training Re-ReND:

Rendering a NeRF using Re-ReND.

Reproducing Our Results

0. Download the code

git clone https://github.com/sararoma95/Re-ReND.git && cd Re-ReND

1. Set up environment with Anaconda

conda env create -f environment.yml

2. Download data for Re-ReND

  1. We extract 10k images and a mesh for each scene of the Blender Synthetic dataset and the Tanks & Temples dataset from MipNeRF and NeRF++, respectively. You can download them here. Note that each scene is large (~120GB).

  2. Then, you'll need to download the datasets from the NeRF official Google Drive and Tanks & Temples. Please download and unzip nerf_synthetic.zip and tanks_and_temples.zip.

  3. Finally, put the datasets from step 2 inside the data folder (create it with mkdir data), and place the scene files from step 1 inside the corresponding scene folder from step 2, e.g. data/nerf_synthetic/chair/logs_exp_lev_0.0_thr_49.0/blender_paper_chair_*.pt and data/nerf_synthetic/chair/meshes/lev_0.0_thr_49.0_blender_paper_chair.ply (see the layout sketch after this list).

    Note that in the folder name logs_exp_lev_0.0_thr_49.0, lev and thr denote the level set and threshold used for the mesh. You can find these values in configs/scene.txt, or read them off the downloaded mesh filename, e.g. lev_0.0_thr_49.0_blender_paper_chair.ply inside the meshes folder.
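For reference, a minimal sketch of the expected layout after the three steps above, using the chair scene from the example (other scenes follow the same pattern):

mkdir -p data
# data/nerf_synthetic/chair/                                                      <- from step 2
# data/nerf_synthetic/chair/logs_exp_lev_0.0_thr_49.0/blender_paper_chair_*.pt    <- from step 1
# data/nerf_synthetic/chair/meshes/lev_0.0_thr_49.0_blender_paper_chair.ply       <- from step 1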

3. Training

We train on an A100 GPU for 2.5 days to reach 380k iterations for synthetic scenes, and for 1 day to reach 150k iterations for Tanks & Temples scenes.

python main.py --config configs/chair.txt --train

If you want to track your experiments, use --with_wandb

In case of GPU OOM, try to reduce --batch_size

In case of CPU OOM, try to reduce --num_files
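For example, a tracked run with a reduced batch size could be launched as follows (the placeholder value is yours to choose based on available memory):

python main.py --config configs/chair.txt --train --with_wandb --batch_size <smaller_batch_size>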

4. Evaluate before quantization (continuous)

python main.py --config configs/chair.txt --render_only

5. Export UVWB textures

python main.py --config configs/chair.txt --export_textures

6. Evaluate after quantization

python main.py --config configs/chair.txt --compute_metrics

7. Running the viewer

The viewer code is provided in this repo as four .html files covering the two types of datasets. Usage instructions are inside the viewer folder.
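If those instructions require serving the pages over HTTP rather than opening the .html files directly, a simple local server is usually sufficient; the port and directory below are illustrative, so defer to the instructions in viewer:

python -m http.server 8000 --directory viewer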

Pretrained models

Here you can download the pretrained models.

Note: Create the data by yourself

The scripts used to extract the data for this specific implementation are also provided, so feel free to recreate the data yourself. They are inside the extract_data folder.

Citation

@article{rojas2023rerend,
  title={{R}e-{R}e{ND}: {R}eal-time {R}endering of {N}e{RF}s across {D}evices},
  author={Rojas, Sara and Zarzar, Jesus and {P{\'e}rez}, Juan C. and Sanakoyeu, Artsiom and Thabet, Ali and Pumarola, Albert and Ghanem, Bernard},
  journal={arXiv preprint arXiv:2303.08717},
  year={2023}
}
