
Neural Rerendering in the Wild

This repository is a fork of https://github.com/google/neural_rerendering_in_the_wild, extended with code for the experiments from https://github.com/Auratons/master_thesis and targeting multiple datasets.

The repository contains git submodules, so either clone it with the --recurse-submodules option or, inside the cloned folder, run git submodule init && git submodule update --recursive.
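For instance (assuming the fork is hosted at https://github.com/Auratons/neural_rendering):

```sh
# Option 1: clone with submodules in one step
git clone --recurse-submodules https://github.com/Auratons/neural_rendering.git

# Option 2: initialize submodules after a plain clone
git clone https://github.com/Auratons/neural_rendering.git
cd neural_rendering
git submodule init && git submodule update --recursive
```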

Repository structure

Top-level files are mostly taken from the upstream repository. The dvc folder is the main entrypoint to experiments. (The folder name is a remnant of a trial of DVC, which proved to be an unsuitable tool for datasets comprised of many small files.) The datasets folder should contain most of the data and their transformations; these are referenced from within the dvc folder. The models folder is the target for trained networks. Finally, the artwin, colmap and inloc folders contain transformation scripts for the three dataset families used in the thesis: the ARTwin Dataset, the Image Matching Challenge data, and the InLoc Dataset, respectively.
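Schematically (the comments only restate the roles described above):

```
.
├── dvc/       # main entrypoint to experiments
├── datasets/  # raw data and their transformations
├── models/    # trained networks
├── artwin/    # ARTwin Dataset transformation scripts
├── colmap/    # Image Matching Challenge transformation scripts
└── inloc/     # InLoc Dataset transformation scripts
```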

Dependencies & Runtime

The project ran on a Slurm-based compute cluster with GPU capabilities operated by the Czech Institute of Informatics, Robotics and Cybernetics. Thus, the scripts in the dvc/scripts folder contain SBATCH directives expressing Slurm scheduler limits and compute requirements for the various workloads. The scripts also reference other projects: InLoc (with code for InLoc dataset transformations as well as the InLoc algorithm itself), Splatter Renderer, and Ray Marcher Renderer.
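For orientation, such a script header might look like the following minimal sketch; the job name and resource values here are hypothetical, not the repository's actual settings:

```sh
#!/bin/bash
#SBATCH --job-name=rerender-train   # hypothetical job name
#SBATCH --partition=gpu             # partition names are cluster-specific
#SBATCH --gres=gpu:1                # request one GPU
#SBATCH --cpus-per-task=8           # CPU cores for the task
#SBATCH --mem=32G                   # memory limit
#SBATCH --time=24:00:00             # wall-clock time limit
```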

The scripts also invoke the binaries time (as GNU time), cpulimit, and Python's yq (which accepts the -r option of the underlying jq).
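For example, a value can be read from the parameters file like this (the key path is a hypothetical illustration):

```sh
# Python's yq wraps jq, so jq-style paths and flags work;
# -r prints the raw string without quotes. The key shown is hypothetical.
yq -r '.train_default.learning_rate' params.yaml
```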

Data

Raw data should be stored in datasets/raw/inloc and datasets/raw/imc/<grand_place_brussels|hagia_sophia_interior|pantheon_exterior>. The ARTwin dataset is not open to the public, so it resided elsewhere on the cluster storage. For the remaining dependencies of the rerendering model, refer to the upstream project's README.
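The expected raw-data layout can be created like this:

```sh
# Create the expected raw-data folders (uses bash brace expansion)
mkdir -p datasets/raw/inloc
mkdir -p datasets/raw/imc/{grand_place_brussels,hagia_sophia_interior,pantheon_exterior}
```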

Running the code

Prepare the conda environment from environment.yml, change to the dvc/pipeline-* subfolder matching the targeted dataset, and run sbatch ../scripts/<SCRIPT_NAME> <CONFIG_NAME>. Each script reads the params.yaml file and picks the configuration key <PREFIX>_<CONFIG_NAME>, where <PREFIX> is the first part of a top-level YAML key in the parameters file and varies across scripts. To find out the prefix for a given script, refer to that script's contents.
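A typical session might look like this sketch; the pipeline folder name is a hypothetical example of the dvc/pipeline-* pattern, and the script and config placeholders are kept as in the text above:

```sh
# One-time setup of the conda environment defined in environment.yml
conda env create -f environment.yml

# "pipeline-inloc" is a hypothetical instance of dvc/pipeline-*
cd dvc/pipeline-inloc

# The script reads params.yaml and looks up the key <PREFIX>_<CONFIG_NAME>
sbatch ../scripts/<SCRIPT_NAME> <CONFIG_NAME>
```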

About

License: Apache License 2.0


Languages

Python 79.4%, Shell 15.6%, MATLAB 5.0%