
HandOccNet: Occlusion-Robust 3D Hand Mesh Estimation Network

Introduction

This repository is the official PyTorch implementation of HandOccNet: Occlusion-Robust 3D Hand Mesh Estimation Network (CVPR 2022). Below is the overall pipeline of HandOccNet.

Quick demo (update soon)

Directory

Root

The ${ROOT} directory is organized as below.

${ROOT}  
|-- data  
|-- demo
|-- common  
|-- main  
|-- output  
  • data contains data loading codes and soft links to images and annotations directories.
  • demo contains demo codes.
  • common contains kernel codes for HandOccNet.
  • main contains high-level codes for training or testing the network.
  • output contains logs, trained models, visualized outputs, and test results.

Data

You need to follow the directory structure of the data as below.

${ROOT}  
|-- data  
|   |-- HO3D
|   |   |-- data
|   |   |   |-- train
|   |   |   |   |-- ABF10
|   |   |   |   |-- ......
|   |   |   |-- evaluation
|   |   |   |-- annotations
|   |   |   |   |-- HO3D_train_data.json
|   |   |   |   |-- HO3D_evaluation_data.json
|   |-- DEX_YCB
|   |   |-- data
|   |   |   |-- 20200709-subject-01
|   |   |   |-- ......
|   |   |   |-- annotations
|   |   |   |   |-- DEX_YCB_s0_train_data.json
|   |   |   |   |-- DEX_YCB_s0_test_data.json
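
If you downloaded the datasets to another location, you can link them into data/ with soft links. Below is a minimal Python sketch, run from ${ROOT}; the source paths are placeholders for wherever you extracted each dataset, not paths confirmed by this repo.

import os

# Map repo-side targets to where the datasets actually live (placeholders).
links = {
    'data/HO3D/data': '/path/to/HO3D/data',        # assumption: extracted HO3D
    'data/DEX_YCB/data': '/path/to/DEX_YCB/data',  # assumption: extracted DexYCB
}

for dst, src in links.items():
    os.makedirs(os.path.dirname(dst), exist_ok=True)  # e.g. data/HO3D
    if not os.path.lexists(dst):
        os.symlink(src, dst)  # soft link instead of copying the data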

PyTorch MANO layer

  • For the MANO layer, I used manopth. The repo is already included in common/utils/manopth.
  • Download MANO_RIGHT.pkl from here and place it at common/utils/manopth/mano/models. A minimal usage sketch follows this list.
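
The snippet below is a minimal usage sketch of the manopth layer, not the exact HandOccNet wrapper; the constructor flags (use_pca, flat_hand_mean) are assumptions about a typical setup.

import torch
from manopth.manolayer import ManoLayer

# mano_root must contain MANO_RIGHT.pkl, placed as described above.
mano_layer = ManoLayer(
    mano_root='common/utils/manopth/mano/models',
    side='right',
    use_pca=False,         # assumption: full 48-dim axis-angle pose
    flat_hand_mean=False,  # assumption
)

pose = torch.zeros(1, 48)   # global rotation (3) + per-joint axis-angle (45)
shape = torch.zeros(1, 10)  # MANO shape (beta) coefficients
verts, joints = mano_layer(pose, shape)  # (1, 778, 3) mesh vertices, (1, 21, 3) joints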

Output

You need to follow the directory structure of the output folder as below.

${ROOT}  
|-- output  
|   |-- log  
|   |-- model_dump  
|   |-- result  
|   |-- vis  
  • Creating the output folder as a soft link instead of a regular folder is recommended, since the outputs can take up a lot of storage (see the sketch after this list).
  • log folder contains the training log files.
  • model_dump folder contains saved checkpoints for each epoch.
  • result folder contains final estimation files generated in the testing stage.
  • vis folder contains visualized results.
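
A minimal Python sketch for the recommended soft-link setup, run from ${ROOT}; the external storage path is a placeholder for a drive with enough space.

import os

storage = '/path/to/large/storage/handoccnet_output'  # placeholder storage location
os.makedirs(storage, exist_ok=True)

if not os.path.lexists('output'):
    os.symlink(storage, 'output')  # soft link instead of a regular folder

for sub in ('log', 'model_dump', 'result', 'vis'):
    os.makedirs(os.path.join('output', sub), exist_ok=True)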

Running HandOccNet

Start

  • Install Python >= 3.7.4 and PyTorch, then run sh requirements.sh.
  • In main/config.py, you can change model settings, including the dataset to use and the input size (see the sketch after this list).
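
For illustration, settings of this kind in main/config.py; only trainset and testset are named by this README, so the input-size field below is an assumption about a typical entry and may differ in the actual file.

trainset = 'HO3D'             # or 'DEX_YCB'
testset = 'HO3D'              # or 'DEX_YCB'
input_img_shape = (256, 256)  # assumption: input size setting; check the actual file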

Train

In the main folder, set trainset in config.py (as 'HO3D' or 'DEX_YCB') and run

python train.py --gpu 0-3

to train HandOccNet on GPUs 0, 1, 2, and 3. --gpu 0,1,2,3 can be used instead of --gpu 0-3.

Test

Place the trained model in output/model_dump/.

In the main folder, set testset in config.py (as 'HO3D' or 'DEX_YCB') and run

python test.py --gpu 0-3 --test_epoch {test epoch}  

to test HandOccNet on GPUs 0, 1, 2, and 3 with the checkpoint from epoch {test epoch}. --gpu 0,1,2,3 can be used instead of --gpu 0-3.

  • For the HO3D dataset, pred{test epoch}.zip will be generated in the output/result folder. You can upload it to the CodaLab challenge and see the results.
  • Our trained model can be downloaded from here.

Results

Here I report the performance of HandOccNet.

Reference (update soon)

@InProceedings{Park_2022_CVPR_HandOccNet,  
author = {Park, JoonKyu and Oh, Yeonguk and Moon, Gyeongsik and Choi, Hongsuk and Lee, Kyoung Mu},  
title = {HandOccNet: Occlusion-Robust 3D Hand Mesh Estimation Network},  
booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},  
year = {2022}  
}  

Acknowledgements

For this project, we relied on research codes from other repositories, including manopth (see above).
