3D-CODED

PyTorch implementation for the project "3D-CODED : 3D Correspondences by Deep Deformation".

Home page: http://imagine.enpc.fr/~groueixt/3D-CODED/index.html


🚀 Major upgrade 🚀: Migration to PyTorch v1 and Python 3.7. The code is now much more generic and easier to install.

3D-CODED : 3D Correspondences by Deep Deformation 📃

This repository contains the source code for the paper 3D-CODED : 3D Correspondences by Deep Deformation. The task is to put two meshes in point-wise correspondence. Below, given two human scans with holes, the reconstructions are in correspondence (suggested by color).

Citing this work

If you find this work useful in your research, please consider citing:

@inproceedings{groueix2018b,
          title = {3D-CODED : 3D Correspondences by Deep Deformation},
          author = {Groueix, Thibault and Fisher, Matthew and Kim, Vladimir G. and Russell, Bryan and Aubry, Mathieu},
          booktitle = {ECCV},
          year = {2018}
}

Project Page

The project page is available at http://imagine.enpc.fr/~groueixt/3D-CODED/

Install 👷

This implementation uses PyTorch.

git clone git@github.com:ThibaultGROUEIX/3D-CODED.git ## Download the repo
conda create --name pytorch-atlasnet python=3.7 ## Create python env
source activate pytorch-atlasnet
pip install pandas visdom trimesh sklearn
conda install pytorch torchvision -c pytorch # or from sources if you prefer
# you're done ! Congrats :)

Tested on 11/18 with PyTorch 0.4.1 (py37_py36_py35_py27__9.0.176_7.1.2_2) and with the latest source build.
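
As a quick sanity check of the install (this snippet is not part of the repository), you can verify that the main dependencies import and that the GPU is visible:

import torch
import trimesh
import visdom
print(torch.__version__, torch.cuda.is_available())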

Build chamfer distance

source activate pytorch-atlasnet
cd 3D-CODED/extension
python setup.py install
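
To see what this extension computes, here is a minimal pure-PyTorch sketch of the (squared) Chamfer distance between two point clouds. It is only an illustration of the metric: the compiled extension provides a much faster CUDA implementation, and the function name below is not the extension's API.

import torch

def chamfer_distance_naive(a, b):
    # a: (Na, 3) and b: (Nb, 3) point clouds.
    # Pairwise squared distances, shape (Na, Nb).
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    # Mean nearest-neighbor squared distance, in both directions.
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

a = torch.rand(1000, 3)
b = torch.rand(1200, 3)
print(chamfer_distance_naive(a, b).item())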

Using the Trained models 🚆

The trained models and some corresponding results are also available online:

On the demo meshes

Requires 3 GB of GPU memory and about 17 seconds to run (Titan X Pascal).

cd trained_models; ./download_models.sh; cd .. # download the trained models
cd data; ./download_template.sh; cd .. # download the template
python inference/correspondences.py

This script takes two meshes from data as input and computes correspondences, which are saved in results. The reconstructions are saved in data.

It should look like:

  • Initial guesses for example0 and example1:

  • Final reconstruction for example0 and example1:

On your own meshes

You need to make sure your meshes are preprocessed correctly:

  • The meshes are loaded with Trimesh, which should support a bunch of formats, but I only tested .ply files. Good converters include Assimp and Pymesh.

  • The trunk axis is the Y axis (visualize your mesh against the mesh in data to make sure they are normalized in the same way).

  • The scale should be about 1.7 for a standing human (meaning the unit for the point cloud is the meter). You can automatically scale your meshes with the flag --scale 1 (see the sketch right after this list).
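
If you want to check these conventions programmatically, here is a small sketch using Trimesh; the file names are placeholders, and the rescaling below only normalizes the height to 1.7, whereas the actual --scale 1 flag matches the template's volume.

import trimesh

# Placeholder path: replace with your own mesh.
mesh = trimesh.load("my_scan.ply", process=False)

# Drop vertices not referenced by any face (lonely outliers can break the
# PointNet encoder, see the failure modes section below).
mesh.remove_unreferenced_vertices()

# The trunk axis should be Y: for a standing human, the Y extent should be
# the largest one and close to 1.7 (meters).
extents = mesh.extents  # bounding-box size along X, Y, Z
print("extents:", extents)

# Rough height normalization to ~1.7.
mesh.apply_scale(1.7 / extents[1])
mesh.export("my_scan_normalized.ply")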

Options

'--HR', type=int, default=1, help='Use the high-resolution template for better precision in the nearest-neighbor step?'
'--nepoch', type=int, default=3000, help='number of epochs to train for during the regression step'
'--model', type=str, default = 'trained_models/sup_human_network_last.pth',  help='your path to the trained model'
'--inputA', type=str, default =  "data/example_0.ply",  help='your path to mesh 0'
'--inputB', type=str, default =  "data/example_1.ply",  help='your path to mesh 1'
'--num_points', type=int, default = 6890,  help='number of points fed to PointNet'
'--num_angles', type=int, default = 100,  help='number of angles in the search for the optimal reconstruction. Set to 1 if your meshes already face the canonical direction, as in data/example_1.ply'
'--env', type=str, default="CODED", help='visdom environment'
'--clean', type=int, default=0, help='if 1, remove points that do not belong to any edge'
'--scale', type=int, default=0, help='if 1, scale the input mesh to have the same volume as the template'
'--project_on_target', type=int, default=0, help='if 1, project predicted correspondence points onto the target mesh'
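
For example, to run on a pair of your own meshes that already face the canonical direction, an invocation could look like this (the .ply paths are placeholders):

python inference/correspondences.py --inputA my_scan_A.ply --inputB my_scan_B.ply --num_angles 1 --clean 1 --scale 1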

Failure modes instructions ⚠️

  • Sometimes the reconstruction is flipped, which breaks the correspondences. In the easiest case, where your meshes are registered in the same orientation, you can simply fix this angle in reconstruct.py line 86 to avoid the flipping problem. Also note from this line that the angle search only covers [-90°, +90°].

  • Check for lonely outliers that break the PointNet encoder. You can try to remove them with the --clean flag.

Last comments

  • If you want to use inference/correspondences.py to process a whole dataset, like the FAUST test set, make sure you don't reload the network every time you compute correspondences between two meshes (which is what happens with the naive approach of calling inference/correspondences.py iteratively on all the pairs). An example of this bad practice is ./auxiliary/script.sh, for the FAUST inter challenge; a rough load-once pattern is sketched below. Good luck :-)
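
A sketch of that load-once pattern follows; load_network and compute_correspondences are hypothetical placeholders for whatever you factor out of inference/correspondences.py, not functions the repository actually exposes.

import glob
import itertools

def load_network(weights_path):
    # Hypothetical placeholder: build the 3D-CODED network and load the weights once.
    ...

def compute_correspondences(network, path_a, path_b, out_dir):
    # Hypothetical placeholder: run the reconstruction + nearest-neighbor step for one pair.
    ...

network = load_network("trained_models/sup_human_network_last.pth")
meshes = sorted(glob.glob("my_faust_scans/*.ply"))  # placeholder dataset folder
for path_a, path_b in itertools.combinations(meshes, 2):
    # Reuse the already-loaded network instead of restarting the script for each pair.
    compute_correspondences(network, path_a, path_b, out_dir="results")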

Training the autoencoder

Data

The dataset can't be shared because of copyright issues. Since the generation process of the dataset is quite heavy, it has its own README in data/README.md. Brace yourselves :-)

Install Pymesh

Follow the installation instructions from the Pymesh repository.

Pymesh is my favorite geometry processing library for Python; it's developed by an Adobe researcher, Qingnan Zhou. It can be tricky to set up. Trimesh is a good alternative but requires a few code edits in this case.

Options

'--batchSize', type=int, default=32, help='input batch size'
'--workers', type=int, help='number of data loading workers', default=8
'--nepoch', type=int, default=75, help='number of epochs to train for'
'--model', type=str, default='', help='optional reload model path'
'--env', type=str, default="unsup-symcorrect-ratio", help='visdom environment'
'--laplace', type=int, default=0, help='regularize towards 0 curvature, or towards the template curvature'
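
For reference, the --laplace option refers to a Laplacian regularization of the reconstructed mesh. The snippet below is a generic uniform-Laplacian penalty illustrating the idea (regularize either towards zero curvature or towards the template's curvature); it is not the repository's exact implementation.

import torch

def uniform_laplacian(vertices, neighbors):
    # vertices: (N, 3) tensor; neighbors: list of N lists of vertex indices,
    # e.g. taken from the template's edges. The uniform Laplacian at a vertex
    # is the mean of its neighbors minus the vertex itself.
    lap = torch.zeros_like(vertices)
    for i, nbrs in enumerate(neighbors):
        lap[i] = vertices[nbrs].mean(dim=0) - vertices[i]
    return lap

def laplace_loss(pred_vertices, template_vertices, neighbors, towards_template=True):
    lap_pred = uniform_laplacian(pred_vertices, neighbors)
    if towards_template:
        # Penalize deviations from the template's curvature.
        lap_ref = uniform_laplacian(template_vertices, neighbors)
        return (lap_pred - lap_ref).pow(2).mean()
    # Penalize any curvature (regularize towards a smooth surface).
    return lap_pred.pow(2).mean()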

Now you can start training

  • First launch a visdom server:
python -m visdom.server -p 8888
  • Launch the training. Check out all the options in ./training/train_sup.py.
export CUDA_VISIBLE_DEVICES=0 #whichever you want
source activate pytorch-atlasnet
git pull
env=3D-CODED
python ./training/train_sup.py --env $env  |& tee ${env}.txt

(visdom screenshot)

  • Timings, results, memory requirements

Method           FAUST Euclidean error (cm)   GPU memory   Time per epoch⁽²⁾
train_sup.py     2.878                        TODO         TODO
train_unsup.py   4.883                        TODO         TODO

⁽²⁾ This is only an estimate; the code is not optimized.

Acknowledgement

License

MIT

Cool Contributions

  • Zhongshi Jiang applied the trained model to a monster model 👹 (left: original, right: reconstruction)


