Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency
Tom Monnier Matthew Fisher Alexei A. Efros Mathieu Aubry
Official PyTorch implementation of the UNICORN system introduced in Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency. Check out our webpage for video results!
If you find this code useful, don't forget to star the repo ⭐ and cite the paper:
@article{monnier2022unicorn,
title={{Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance
Consistency}},
author={Monnier, Tom and Fisher, Matthew and Efros, Alexei A and Aubry, Mathieu},
journal={arXiv:2204.10310 [cs]},
year={2022},
}
conda env create -f environment.yml
conda activate unicorn
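To quickly check that the environment is working and that the GPU is visible (a minimal sanity check, not part of the official scripts), you can run:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"   # expects the PyTorch version and True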
Optional: some monitoring routines are implemented; you can use them by specifying your visdom port in the config file. You will need to install visdom from source beforehand:
git clone https://github.com/facebookresearch/visdom
cd visdom && pip install -e .
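Then launch the visdom server before training (the port below is only an example; it must match the visdom port you set in the config file):

python -m visdom.server -port 8097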
bash scripts/download_data.sh
This command will download one of the following datasets:
- ShapeNet NMR: paper / NMR paper / dataset (33GB, thanks to the DVR team for hosting the data)
- CUB-200: paper / webpage / dataset (1GB)
- Pascal3D+ Cars: paper / webpage (including FTP download link, 7.5GB) / UCMR annotations (thanks to the UCMR team for releasing them)
- CompCars: paper / webpage / dataset (12GB, thanks to the GIRAFFE team for hosting the data)
- LSUN: paper / webpage / horse dataset (69GB) / moto dataset (42GB)
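Once a dataset is downloaded, you can sanity-check it against the sizes listed above (assuming the script unpacks the data inside the datasets folder):

du -sh datasets/*   # compare with the sizes above, e.g. ~1GB for CUB-200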
bash scripts/download_model.sh
This command will download one of the following models:
- car.pkl trained on CompCars: gdrive link
- bird.pkl trained on CUB-200: gdrive link
- moto.pkl trained on LSUN Motorbike: gdrive link
- horse.pkl trained on LSUN Horse: gdrive link
- sn_*.pkl trained on each ShapeNet category: airplane, bench, cabinet, car, chair, display, lamp, phone, rifle, sofa, speaker, table, vessel
NB: gdown may hang; if so, download the models manually from the gdrive links and move them to the models folder.
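For instance, to fetch a checkpoint by hand with gdown, where FILE_ID is a placeholder for the id taken from the corresponding gdrive link above:

gdown https://drive.google.com/uc?id=FILE_ID -O models/car.pkl   # replace FILE_ID and the output name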
You first need to download the car model (see above), then launch:
cuda=gpu_id model=car.pkl input=demo ./scripts/reconstruct.sh
where:
- gpu_id is a target CUDA device id,
- car.pkl corresponds to a pretrained model,
- demo is a folder containing the target images.

It will create a folder demo_rec containing the reconstructed meshes (.obj format + gif visualizations).
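For example, assuming the target images were put in a folder named demo at the repository root, a concrete call on GPU 0 followed by a quick look at the outputs could be:

cuda=0 model=car.pkl input=demo ./scripts/reconstruct.sh
ls demo_rec/   # one .obj mesh (plus gif visualizations) per input image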
To launch a training from scratch, run:
cuda=gpu_id config=filename.yml tag=run_tag ./scripts/pipeline.sh
where:
- gpu_id is a target CUDA device id,
- filename.yml is a YAML config located in the configs folder,
- run_tag is a tag for the experiment.

Results are saved at runs/${DATASET}/${DATE}_${run_tag}, where DATASET is the dataset name specified in filename.yml and DATE is the current date in mmdd format. Some training visual results, like reconstruction examples, will be saved. Available configs are:
- sn/*.yml for each ShapeNet category
- car.yml for the CompCars dataset
- cub.yml for the CUB-200 dataset
- horse.yml for the LSUN Horse dataset
- moto.yml for the LSUN Motorbike dataset
- p3d_car.yml for the Pascal3D+ Car dataset
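For example, a CUB-200 training tagged baseline could be launched as below; the exact DATASET folder comes from the dataset name set in cub.yml, and the date prefix from the day you launch it:

cuda=0 config=cub.yml tag=baseline ./scripts/pipeline.sh
# results are then written to runs/${DATASET}/${DATE}_baseline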
If you want to learn a model for a custom object category, here are the key things you need to do:
- put your images in a custom_name folder inside the datasets folder
- write a config custom.yml with custom_name as dataset.name and move it to the configs folder; as a rule of thumb for the progressive conditioning milestones, put the number of epochs corresponding to 500k iterations for each stage (see the worked example after this list)
- launch training with:
cuda=gpu_id config=custom.yml tag=custom_run_tag ./scripts/pipeline.sh
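To turn the 500k-iteration rule of thumb into epoch counts, divide 500k by the number of iterations per epoch (number of images / batch size). The numbers below are purely illustrative:

python -c "print(round(500_000 / (10_000 / 32)))"   # 10k images, batch size 32 -> ~1600 epochs per stage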
If you like this project, check out related works from our group:
- Loiseau et al. - Representing Shape Collections with Alignment-Aware Linear Models (3DV 2021)
- Monnier et al. - Unsupervised Layered Image Decomposition into Object Prototypes (ICCV 2021)
- Monnier et al. - Deep Transformation Invariant Clustering (NeurIPS 2020)
- Deprelle et al. - Learning elementary structures for 3D shape generation and matching (NeurIPS 2019)
- Groueix et al. - AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation (CVPR 2018)