
View Invariant Stochastic Prototype Embeddings (VISPE)

Code for "Exploit Clues from Views: Self-Supervised and Regularized Learning for Multiview Object Recognition", published at CVPR 2020. Please refer to the project webpage (https://chihhuiho.github.io/vispe_web/) for more details.

Usage

Evaluate the pretrained model

  1. Clone the project to directory DIR:
git clone https://github.com/chihhuiho/VISPE.git
  2. Create and activate the conda environment:
conda env create -f environment.yml -n VISPE
conda activate VISPE
  3. Download the ModelNet dataset:
sh download.sh
  4. Download the pretrained model from here and place it in the "model" folder (an optional checkpoint sanity check is sketched after this list).
  5. Evaluate the pretrained model:
python main.py --load_pretrain --evaluate
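
After step 4, it can help to confirm that the downloaded checkpoint actually loads before running the evaluation. The snippet below is a minimal sanity check, assuming a standard PyTorch checkpoint; the path "model/pretrained.pth" is a placeholder and should be replaced with the actual filename you placed in the "model" folder.

```python
# check_checkpoint.py -- quick sanity check for the downloaded checkpoint.
# "model/pretrained.pth" is a placeholder path; use the filename you saved
# into the "model" folder.
import torch

CKPT_PATH = "model/pretrained.pth"

# map_location="cpu" lets the check run on machines without a GPU.
ckpt = torch.load(CKPT_PATH, map_location="cpu")

# Checkpoints are commonly either a raw state_dict or a dict wrapping one.
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

print(f"Loaded {len(state_dict)} entries")
for name, value in list(state_dict.items())[:5]:
    shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
    print(f"  {name}: {shape}")
```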

Train your own model

  1. Train the model from scratch:
python main.py
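
For a feel of what training optimizes, the sketch below is a toy version of a view-invariant embedding loss: embeddings of views rendered from the same object are pulled toward a shared batch prototype. It is illustrative only, simplifies away the stochastic prototype formulation described in the paper, and is not the repository's implementation; all names in it (view_invariant_loss, the embedding sizes) are made up for the example.

```python
# toy_view_invariant.py -- illustrative sketch only; not the VISPE implementation.
import torch
import torch.nn.functional as F


def view_invariant_loss(view_embeddings, object_ids, temperature=0.1):
    """Pull embeddings of views of the same object toward a shared prototype.

    view_embeddings: (N, D) embeddings, several views per object in the batch.
    object_ids:      (N,) integer id of the object each view was rendered from.
    """
    z = F.normalize(view_embeddings, dim=1)
    # Prototype of each object = mean direction of its view embeddings in the batch.
    unique_ids, inverse = torch.unique(object_ids, return_inverse=True)
    protos = torch.zeros(len(unique_ids), z.size(1)).index_add_(0, inverse, z)
    protos = F.normalize(protos, dim=1)
    # Classify every view against the batch prototypes (softmax over cosine sims).
    logits = z @ protos.t() / temperature
    return F.cross_entropy(logits, inverse)


if __name__ == "__main__":
    # 4 objects x 3 views, random 128-d embeddings standing in for a CNN encoder.
    emb = torch.randn(12, 128, requires_grad=True)
    ids = torch.arange(4).repeat_interleave(3)
    loss = view_invariant_loss(emb, ids)
    loss.backward()
    print(f"loss = {loss.item():.4f}")
```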

Citation

If you find this method useful in your research, please cite this article:

@InProceedings{Ho_2020_CVPR,
author = {Ho, Chih-Hui and Liu, Bo and Wu, Tz-Ying and Vasconcelos, Nuno},
title = {Exploit Clues From Views: Self-Supervised and Regularized Learning for Multiview Object Recognition},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}

Acknowledgement

Please email Chih-Hui (John) Ho (chh279@eng.ucsd.edu) if you encounter any further issues.
