Bugs fixed in the model, dataloader, and train files.

You2Me: Inferring Body Pose in Egocentric Video via First and Second Person Interactions (CVPR 2020)


Test

Please generate:

  • directory of homographies (see calc_homgraphy/README.md)
  • directory of openpose predictions
  • vocab.pkl (see vocab/build_vocab.py)

for your sample sequence.
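
Before invoking sample.py, it can help to confirm the generated inputs are in place. The sketch below is a minimal pre-flight check, not part of the repository: the single-directory layout and the names homographies/, openpose/, and vocab.pkl are assumptions for illustration, so substitute whatever paths you actually generated.

# Minimal pre-flight check (sketch; names below are placeholders).
import os

sequence_dir = "path/to/sample_sequence"          # placeholder: your sequence
required = [
    os.path.join(sequence_dir, "homographies"),   # see calc_homgraphy/README.md
    os.path.join(sequence_dir, "openpose"),       # OpenPose prediction directory
    os.path.join(sequence_dir, "vocab.pkl"),      # see vocab/build_vocab.py
]

for path in required:
    status = "ok" if os.path.exists(path) else "MISSING"
    print(f"{status:8}{path}")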

Then run the following command:

python sample.py --vocab_path <path/to/sample_vocab.pkl> --output <path/to/output_dir> --encoder_path <path/to/trained/encoder.pth> --decoder_path <path/to/trained/decoder.pth> --upp

Change the --upp flag to --low to test the lower-body model.

Include the --visualize flag to plot the predicted stick figures.

Train

Please generate:

  • directory of homographies (see calc_homgraphy/README.md)
  • directory of openpose predictions
  • vocab.pkl (see vocab/build_vocab.py)
  • annotation.pkl (see vocab/build_annotation.py)

for each of your training sequences.
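
Both pickle files can be sanity-checked before training. The snippet below only confirms they deserialize; their internal structure is defined by vocab/build_vocab.py and vocab/build_annotation.py, and the filenames here are placeholders for your own files.

# Sanity-check that the pickles load (sketch).
import pickle

# Run from the repository root so any custom classes pickled by
# vocab/build_vocab.py (e.g. a vocabulary wrapper) are importable.
for name in ("train_vocab.pkl", "annotation.pkl"):   # placeholder filenames
    with open(name, "rb") as f:
        obj = pickle.load(f)
    print(name, "->", type(obj).__name__)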

Then run the following command:

python train.py --model_path <path/to/save/models> --vocab_path <path/to/train_vocab.pkl> --annotation_path <path/to/annotation.pkl> --upp

Change the --upp flag to --low to train the lower-body model.
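
If you want both models, a small wrapper can run the documented command twice, once per flag. This is only a convenience sketch; the paths are placeholders for your own files, and the flags are the ones documented above.

# Train the upper- and lower-body models back to back (sketch).
import subprocess

for flag in ("--upp", "--low"):
    subprocess.run(
        [
            "python", "train.py",
            "--model_path", "path/to/save/models",         # placeholder
            "--vocab_path", "path/to/train_vocab.pkl",     # placeholder
            "--annotation_path", "path/to/annotation.pkl", # placeholder
            flag,
        ],
        check=True,  # stop if a run fails
    )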

License

CC-BY-NC 4.0. See the LICENSE file.

Citation

@inproceedings{ng2019you2me,
  title={You2Me: Inferring Body Pose in Egocentric Video via First and Second Person Interactions},
  author={Ng, Evonne and Xiang, Donglai and Joo, Hanbyul and Grauman, Kristen},
  booktitle={CVPR},
  year={2020}
}
