This is the code for the human skeleton tracker.
Install the following dependencies:
```
pip install torch==1.4.0 torchvision==0.5.0
pip install future scipy matplotlib pandas tensorboard
```
If you are on Ubuntu 20.04, you may also need to install:
```
pip install grpcio==1.20.1
```
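To confirm the pinned versions actually resolved, here is a small stdlib-only check. The helper below is illustrative only and not part of the repo:

```python
from importlib import metadata

# Version pins from the install commands above.
PINS = {"torch": "1.4.0", "torchvision": "0.5.0", "grpcio": "1.20.1"}

def check_pins(pins):
    """Return {package: (installed_version, pinned_version)} for every
    package whose installed version differs from the pin (None = missing)."""
    mismatched = {}
    for pkg, want in pins.items():
        try:
            have = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            have = None
        if have != want:
            mismatched[pkg] = (have, want)
    return mismatched

# Usage: print(check_pins(PINS)) -> empty dict means all pins are satisfied.
```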
You can request access to the dataset here.
Directory structure:
```
H3.6m
|-- S1
|-- S2
|-- S3
|-- ...
`-- S10
```
The dataset also includes an additional "Tests" folder containing a few tests performed before the actual data collection.
Put all the downloaded datasets in the ./datasets directory or any other path. If you use a different path, edit "opt.py" and set the "--root_path" parameter to your dataset location.
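Before training, it can help to verify that the dataset was unpacked into the layout shown above. The stdlib-only sketch below is not part of the repo, and it assumes the subject folders run S1 through S10 as the directory tree suggests:

```python
import os

def missing_subjects(root, subjects=tuple("S%d" % i for i in range(1, 11))):
    """Return the subject folders (S1..S10) that are absent under the
    dataset root, so a bad --root_path is caught before training starts."""
    return [s for s in subjects if not os.path.isdir(os.path.join(root, s))]

# Usage: missing_subjects("./datasets/H3.6m") -> [] when the layout is complete.
```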
You can download the following models according to your needs and place them in the root folder:
- input: 50, output: 25, heads: 4, goal_features: 3 -> model
All the running args are defined in opt.py. We use the following commands to train on different datasets and representations.
Simple training with no added features:
```
python main_iri_handover_3d.py --kernel_size 10 --dct_n 20 --input_n 50 --output_n 40 --skip_rate 5 --batch_size 256 --test_batch_size 128 --in_features 27 --num_heads 5
```
Train with robot end effector data:
```
python main_iri_handover_3d.py --kernel_size 10 --dct_n 20 --input_n 50 --output_n 40 --skip_rate 5 --batch_size 256 --test_batch_size 128 --in_features 27 --num_heads 5 --goal_features 3 --fusion_model 1
```
Train with obstacle position:
```
python main_iri_handover_3d.py --kernel_size 10 --dct_n 20 --input_n 50 --output_n 40 --skip_rate 5 --batch_size 256 --test_batch_size 128 --in_features 27 --num_heads 5 --obstacles_condition --fusion_model 1
```
Train with intention and phase classifiers:
```
python main_iri_handover_3d.py --kernel_size 10 --dct_n 20 --input_n 50 --output_n 40 --skip_rate 5 --batch_size 256 --test_batch_size 128 --in_features 27 --num_heads 5 --fusion_model 1 --phase --intention
```
Training with all options:
```
python main_iri_handover_3d.py --kernel_size 10 --dct_n 20 --input_n 50 --output_n 40 --skip_rate 5 --batch_size 256 --test_batch_size 128 --in_features 27 --num_heads 5 --goal_features 3 --obstacles_condition --fusion_model 1 --phase --intention
```
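The --dct_n flag suggests that joint trajectories are encoded with a discrete cosine transform, as in the LTD / History-Repeats-Itself line of work this code builds on; how the repo applies it internally is an assumption. A minimal NumPy sketch of the idea, matching --input_n 50 and --dct_n 20:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n (rows = frequencies)."""
    m = np.zeros((n, n))
    for k in range(n):
        w = np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n)
        for i in range(n):
            m[k, i] = w * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    return m

seq_len, dct_n = 50, 20                  # --input_n 50, --dct_n 20
traj = np.sin(np.linspace(0, 3, seq_len))  # one joint coordinate over time
D = dct_matrix(seq_len)
coeffs = D[:dct_n] @ traj                # keep 20 lowest-frequency coefficients
approx = D[:dct_n].T @ coeffs            # smoothed reconstruction, shape (50,)
```

Truncating to the lowest-frequency coefficients gives a compact, smooth representation of each trajectory, which is why a 50-frame input can be summarized by 20 numbers per coordinate.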
To evaluate, move the saved weights from the corresponding checkpoint folder to the root folder and run:
```
python main_iri_handover_3d.py --kernel_size 10 --dct_n 20 --input_n 50 --output_n 40 --skip_rate 5 --batch_size 256 --test_batch_size 128 --in_features 27 --num_heads 5 --goal_features 3 --obstacles_condition --fusion_model 1 --phase --intention --is_load --is_eval
```
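For interpreting the evaluation output: 3D pose prediction work in this line of research typically reports mean per-joint position error (MPJPE); that this repo reports exactly this metric is an assumption. A reference implementation:

```python
import numpy as np

def mpjpe(pred, target):
    """Mean per-joint position error: the Euclidean distance between
    predicted and ground-truth joint positions, averaged over all
    frames and joints. Inputs have shape (frames, joints, 3)."""
    return np.linalg.norm(pred - target, axis=-1).mean()
```

Lower is better; when the poses are expressed in millimetres, the result is the familiar "error in mm" number.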
This code is a variation of the work by Wei Mao, Miaomiao Liu, and Mathieu Salzmann in the paper [History Repeats Itself: Human Motion Prediction via Motion Attention](https://arxiv.org/abs/2007.11755), presented at ECCV 2020. The overall code framework (data loading, training, testing, etc.) is adapted from 3d-pose-baseline.
The predictor model code is adapted from LTD.
Some of our evaluation and data-processing code was adapted/ported from Residual Sup. RNN by Julieta.
MIT