Learning Neural Light Fields with Ray-Space Embedding
Website | Paper | Data | Results
This repository contains a pytorch-lightning implementation of the paper Learning Neural Light Fields with Ray-Space Embedding. The entirety of neural-light-fields is licensed under the MIT license. The design of this project was inspired by nerf_pl.
Installation
To set up a conda environment with all of the required Python dependencies, run
conda env create -f environment.yml
Next, to build the NSVF voxel octree intersection package, run
python setup.py build_ext --inplace
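Putting the two steps together, a typical first-time setup might look like this (the environment name neural-light-fields below is an assumption; check the name: field in environment.yml):

# Create the conda environment with all Python dependencies
conda env create -f environment.yml
# Activate it; the actual name comes from the name: field in environment.yml
conda activate neural-light-fields
# Build the NSVF voxel octree intersection extension in-place
python setup.py build_ext --inplace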
Datasets
We have consolidated all of the datasets we make use of here. Note that we have made no modifications to the original datasets; they are simply collected in one place for convenience.
To use one of the zipped datasets, unzip it and place it within the data/ folder.
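For example, assuming a downloaded archive named stanford_lf.zip (a hypothetical filename; use whatever the archive you downloaded is actually called):

# Unzip the dataset archive into the data/ folder (filename is illustrative)
mkdir -p data
unzip stanford_lf.zip -d data/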
Training
Stanford Lightfield Training
To train a neural light field on the Stanford Light Field Dataset, run
python main.py experiment=stanford_lf experiment/dataset=stanford_<scene>
You can change the model / embedding network with
python main.py experiment=stanford_lf experiment/model=stanford_<affine/feature/no_embed>
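For example, to train on the knights scene with the affine embedding (knights is one of the Stanford Light Field scenes, used here purely for illustration; check the config names under the experiment/dataset directory for the exact identifiers):

# Illustrative invocation: Stanford knights scene with the affine embedding
python main.py experiment=stanford_lf experiment/dataset=stanford_knights experiment/model=stanford_affine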
Shiny Dense Lightfield Training
To train a neural light field on the Shiny Dataset (CD or Lab scenes), run
python main.py experiment=shiny_lf_dense experiment.dataset.collection=<scene>
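For example, for the CD scene (the collection identifier is assumed to be the lowercase scene name; check the folder names in your unzipped dataset):

# Illustrative invocation: dense light field on the Shiny CD scene
python main.py experiment=shiny_lf_dense experiment.dataset.collection=cd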
LLFF Subdivided Lightfield Training
To train a subdivided neural light field on NeRF's Real Forward Facing Dataset, run
python main.py experiment=llff_subdivided experiment.dataset.collection=<scene> experiment.model.subdivision.max_hits=<num_subdivisions>
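For example, to train on the fern scene with 4 subdivisions (fern is one of the Real Forward Facing scenes; max_hits=4 is an illustrative choice, not a tuned value):

# Illustrative invocation: subdivided light field on the LLFF fern scene
python main.py experiment=llff_subdivided experiment.dataset.collection=fern experiment.model.subdivision.max_hits=4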
Shiny Subdivided Lightfield Training
To train a subdivided neural light field on the Shiny dataset, run
python main.py experiment=shiny_subdivided experiment.dataset.collection=<scene> experiment.model.subdivision.max_hits=<num_subdivisions>
We use a slightly different configuration (with a larger batch size) for the denser CD and Lab sequences from the Shiny dataset. For these sequences, run
python main.py experiment=shiny_subdivided_dense experiment.dataset.collection=<scene> experiment.model.subdivision.max_hits=<num_subdivisions>
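For example, for the CD scene with the dense configuration (the collection name and max_hits value below are illustrative):

# Illustrative invocation: subdivided dense light field on the Shiny CD scene
python main.py experiment=shiny_subdivided_dense experiment.dataset.collection=cd experiment.model.subdivision.max_hits=4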
Testing
Testing is performed automatically during training, with frequency dictated by experiment.training.test_every. Results are written by default to logs/<experiment_name>/val_images/<epoch>. You can also manually trigger testing by running
python main.py ... <model_settings> ... experiment.test_only=True
In this case, the test set predictions and ground truth will be written out to <log_dir>/testset.
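For example, to run test-set evaluation for the Stanford configuration shown above (assuming the experiment settings match those used for training, so the run resolves to the same log directory):

# Illustrative invocation: evaluate a trained Stanford model on its test set
python main.py experiment=stanford_lf experiment/dataset=stanford_knights experiment.test_only=True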
Rendering
Rendering is performed automatically during training, with frequency dictated by experiment.training.render_every. Individual video frames are written by default to logs/<experiment_name>/val_videos/<epoch>. You can also manually trigger rendering by running
python main.py ... <model_settings> ... experiment.render_only=True
In this case, the individual rendered frames will be written out to <log_dir>/render.
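For example, matching the Stanford training configuration above:

# Illustrative invocation: render frames from a trained Stanford model
python main.py experiment=stanford_lf experiment/dataset=stanford_knights experiment.render_only=True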
Results
We provide all of the predicted images that we use to compute our evaluation metrics here.
Evaluation
You will need to create a new conda environment to run the evaluation code, since it relies on a different version of CUDA / TensorFlow:
cd baselines/third_party/evaluation
conda env create -f environment.yml
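Then activate it before running the evaluation script (the environment name evaluation below is an assumption; check the name: field in this environment.yml):

# Activate the evaluation environment (name is an assumption)
conda activate evaluation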
For structured light field predictions, where the directory containing your model's image predictions has all images, including predictions on training images (e.g. from our modified X-Fields codebase), run
python run_evaluation.py --mode lightfield --gt_dir <gt_dir> --pred_dir <pred_dir> --out_dir <out_dir> --metrics_file <out_metrics_file>
When you have only held-out images, and ground truth / predictions are in the same directory (e.g. for our method and NeRF), run
python run_evaluation.py --mode same_dir --gt_dir <gt_dir> --out_dir <out_dir> --metrics_file <out_metrics_file>
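For example, for a held-out evaluation where ground truth and predictions live side by side (all paths below are illustrative):

# Illustrative invocation: compute metrics for held-out predictions
python run_evaluation.py --mode same_dir --gt_dir results/fern_testset --out_dir eval/fern --metrics_file eval/fern/metrics.txt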
Citation
@inproceedings{attal2022learning,
  author    = {Benjamin Attal and Jia-Bin Huang and Michael Zollh{\"o}fer and Johannes Kopf and Changil Kim},
  title     = {Learning Neural Light Fields with Ray-Space Embedding Networks},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022},
}