Sara Rojas1, Jesus Zarzar1, Juan C. Pérez1, Artsiom Sanakoyeu2, Ali Thabet2, Albert Pumarola2, Bernard Ghanem1
KAUST1, Meta Research2
[TL;DR] We propose Re-ReND for efficient real-time rendering of pre-trained Neural Radiance Fields (NeRFs) on resource-limited devices. Re-ReND achieves this by distilling the NeRF representation into a mesh of learned densities and a set of matrices representing the learned light field, which can be queried using inexpensive matrix multiplications.
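The light-field query described above can be sketched with plain matrix multiplications. This is an illustrative sketch, not the official implementation: the names (`uvw`, `basis`, `beta`), the embedding dimension, and the use of a sigmoid are assumptions made for the example.

```python
# Hedged sketch of querying a factorized light field: per-point embeddings
# (baked into textures) are combined with a view-dependent vector via cheap
# matrix multiplications. All shapes and names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_points, d, n_basis = 4, 8, 6                # assumed dimensions
uvw = rng.standard_normal((n_points, 3 * d))  # per-point embeddings, one block per color channel
basis = rng.standard_normal((d, n_basis))     # view-direction basis matrix
beta = rng.standard_normal(n_basis)           # coefficients for one viewing direction

def shade(uvw, basis, beta):
    """Recover RGB with matrix multiplications: one inner product per channel."""
    u, v, w = np.split(uvw, 3, axis=-1)       # three (n_points, d) embedding blocks
    proj = basis @ beta                       # (d,) view-dependent vector
    rgb = np.stack([u @ proj, v @ proj, w @ proj], axis=-1)
    return 1.0 / (1.0 + np.exp(-rgb))         # squash to valid colors in (0, 1)

colors = shade(uvw, basis, beta)              # (n_points, 3), one RGB per point
```

Because the per-point embeddings are fixed after distillation, only the small view-dependent product changes per frame, which is what makes the query cheap on resource-limited devices.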
This repository contains the official implementation of Re-ReND, for rendering NeRFs in real time on devices such as AR/VR headsets, mobile phones, and tablets.
This code allows for:

- Training Re-ReND
- Rendering a NeRF using Re-ReND

## Setup

```bash
git clone https://github.com/sararoma95/Re-ReND.git && cd Re-ReND
conda env create -f environment.yml
```
## Data

1. We extract 10k images and a mesh for each scene of the Blender Synthetic dataset and the Tanks & Temples dataset from MipNeRF and NeRF++, respectively. You can download them here. Note that each scene is large (~120 GB).
2. You'll also need to download the original datasets from the NeRF official Google Drive and Tanks & Temples. Please download and unzip `nerf_synthetic.zip` and `tanks_and_temples.zip`.
3. Finally, put the datasets from 2. inside a `data` folder (create it with `mkdir data`), and place the scene files from 1. inside the corresponding dataset folders from 2. E.g. `/data/nerf_synthetic/chair/logs_exp_lev_0.0_thr_49.0/blender_paper_chair_*.pt` and `/data/nerf_synthetic/chair/meshes/lev_0.0_thr_49.0_blender_paper_chair.ply`.

Note that in the folder name `logs_exp_lev_0.0_thr_49.0`, `lev` and `thr` are the mesh's level set and the threshold used. To obtain that information, you can check `configs/scene.txt`; alternatively, when downloading you will see a `meshes` folder with a mesh file named like `lev_0.0_thr_49.0_blender_paper_chair.ply`.
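The `lev`/`thr` naming convention above can be parsed programmatically. The helper below is hypothetical (not part of this repo) and only illustrates the folder- and file-name pattern:

```python
# Hypothetical helper: recover the level set (`lev`) and threshold (`thr`)
# from names like `logs_exp_lev_0.0_thr_49.0` or
# `lev_0.0_thr_49.0_blender_paper_chair.ply`. Not part of the repo.
import re

def parse_lev_thr(name: str) -> tuple[float, float]:
    """Return (level_set, threshold) parsed from a log-folder or mesh-file name."""
    m = re.search(r"lev_(-?\d+(?:\.\d+)?)_thr_(-?\d+(?:\.\d+)?)", name)
    if m is None:
        raise ValueError(f"no lev/thr pattern found in {name!r}")
    return float(m.group(1)), float(m.group(2))
```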
## Training

We train on an A100 GPU for 2.5 days to reach 380k iterations for synthetic scenes, and for 1 day to reach 150k iterations for Tanks & Temples scenes.

```bash
python main.py --config configs/chair.txt --train
```

- If you want to track your experiments, use `--with_wandb`.
- In case of GPU OOM, try reducing `--batch_size`.
- In case of CPU OOM, try reducing `--num_files`.
## Rendering, exporting textures, and computing metrics

```bash
python main.py --config configs/chair.txt --render_only
python main.py --config configs/chair.txt --export_textures
python main.py --config configs/chair.txt --compute_metrics
```
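For reference, the standard PSNR metric commonly reported for NeRF renderings can be computed as below. This is an illustrative helper, not the repo's exact `--compute_metrics` implementation:

```python
# Hedged sketch: PSNR in dB between a rendered image and ground truth,
# both with values in [0, max_val]. Illustrative, not the repo's code.
import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio; higher is better, identical images give inf."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val**2 / mse)
```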
## Real-time viewer

The viewer code is provided in this repo as four `.html` files, covering the two types of datasets. Usage instructions are inside the `viewer` folder.
## Pretrained models

Here you can download the pretrained models.
## Extracting the data yourself

The scripts used to extract the data for this implementation are also provided, so feel free to recreate the data yourself. The scripts are inside the `extract_data` folder.
## Citation

```bibtex
@article{rojas2023rerend,
  title={{R}e-{R}e{ND}: {R}eal-time {R}endering of {N}e{RF}s across {D}evices},
  author={Rojas, Sara and Zarzar, Jesus and {P{\'e}rez}, Juan C. and Sanakoyeu, Artsiom and Thabet, Ali and Pumarola, Albert and Ghanem, Bernard},
  journal={arXiv preprint arXiv:2303.08717},
  year={2023}
}
```