wylighting / fast-synth

Public code release for our CVPR 2019 paper "Fast and Flexible Indoor Scene Synthesis via Deep Convolutional Generative Models"

Fast and Flexible Indoor Scene Synthesis via Deep Convolutional Generative Models

PyTorch code for our CVPR 2019 paper "Fast and Flexible Indoor Scene Synthesis via Deep Convolutional Generative Models".

Requires PyTorch 0.4 to run; additional Python library requirements can be found in /scene-synth/requirements.txt. Run

pip install -r requirements.txt

to install.
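As a quick sanity check (a minimal sketch, not part of the repo), you can confirm that the installed PyTorch version matches the 0.4 requirement before running anything:

    # Hypothetical helper, not included in the repository: verify the
    # PyTorch version is 0.4.x and report whether CUDA is available.
    import torch

    major, minor = torch.__version__.split(".")[:2]
    assert (major, minor) == ("0", "4"), (
        "This code expects PyTorch 0.4.x, found %s" % torch.__version__)
    print("PyTorch", torch.__version__, "OK; CUDA available:", torch.cuda.is_available())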

We trained our models on the SUNCG dataset. Due to an ongoing legal dispute, SUNCG is currently unavailable for download. As such, we feel we should not release anything derived from the dataset, including the pre-trained models and several metadata files that our code relies on. We will update this page if this situation changes.

cat.py, loc.py, orient.py, and dims.py contain the four components of our model, respectively. fast_synth.py contains the main scene synthesis code. We will update this page with instructions for running them; a rough sketch of how the pieces fit together follows below.
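For orientation only, here is an illustrative, self-contained sketch of the iterative insertion loop that the paper describes: objects are added one at a time by choosing a category, then a location, an orientation, and dimensions. All function and variable names below are hypothetical placeholders using random sampling; the actual trained modules live in the files listed above, not in this snippet.

    # Illustrative sketch (NOT the released code): the iterative object-insertion
    # loop, with placeholder samplers standing in for the trained networks.
    import random

    CATEGORIES = ["bed", "wardrobe", "nightstand", "<STOP>"]  # hypothetical label set

    def sample_category(scene):
        # stands in for cat.py: choose which object category to add next, or stop
        return random.choice(CATEGORIES)

    def sample_location(scene, category):
        # stands in for loc.py: choose a placement location on the floor plan
        return (random.uniform(0.0, 1.0), random.uniform(0.0, 1.0))

    def sample_orientation(scene, category, location):
        # stands in for orient.py: choose the object's facing direction (degrees)
        return random.uniform(0.0, 360.0)

    def sample_dimensions(scene, category, location, orientation):
        # stands in for dims.py: choose the object's physical footprint
        return (random.uniform(0.5, 2.0), random.uniform(0.5, 2.0))

    def synthesize(max_objects=20):
        # repeatedly insert objects until the category module signals "stop"
        scene = []
        for _ in range(max_objects):
            category = sample_category(scene)
            if category == "<STOP>":
                break
            location = sample_location(scene, category)
            orientation = sample_orientation(scene, category, location)
            dims = sample_dimensions(scene, category, location, orientation)
            scene.append({"category": category, "location": location,
                          "orientation": orientation, "dims": dims})
        return scene

    if __name__ == "__main__":
        print(synthesize())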

We have a follow-up to this paper set to appear at SIGGRAPH 2019. In that paper, we introduce a high-level "planning" module, formulated as a graph convolutional network, which uses the modules introduced here to fill in/"instantiate" low-level details. Please refer to https://github.com/brownvc/planit for more details.
