Please refer to `install.md` for detailed installation instructions.
Because training requires a large batch size, we highly recommend DDP training. A Slurm-based script is shown below:

```shell
PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \
srun -p ${PARTITION} -n8 --gres=gpu:8 -u \
python -u tools/train.py \
--name t2m_sample \
--batch_size 128 \
--times 200 \
--num_epochs 50 \
--dataset_name t2m \
--distributed
```
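Under DDP, the effective batch size is the per-GPU batch size multiplied by the number of processes. A quick sanity check, using the values from the flags above:

```shell
# Effective batch size under DDP: per-GPU batch size x number of processes.
# Values mirror the flags above (--batch_size 128, 8 GPUs via --gres=gpu:8).
PER_GPU_BATCH=128
NUM_GPUS=8
echo $((PER_GPU_BATCH * NUM_GPUS))  # prints 1024
```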
Otherwise, you can run the training code on a single GPU:

```shell
PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \
python -u tools/train.py \
--name t2m_sample \
--batch_size 128 \
--times 200 \
--num_epochs 50 \
--dataset_name t2m
```
To evaluate a trained model, run:

```shell
# GPU_ID indicates which GPU you want to use
python -u tools/evaluation.py checkpoints/kit/kit_motiondiffuse/opt.txt GPU_ID

# Or omit this argument to run the evaluation on the CPU
python -u tools/evaluation.py checkpoints/kit/kit_motiondiffuse/opt.txt
```
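If you have several trained models, a small loop can generate the evaluation command for each. This is only a sketch: the two checkpoint directories are the ones mentioned in this README, and `echo` makes it a dry run.

```shell
# Print the evaluation command for each checkpoint directory on GPU 0.
# Sketch only: remove `echo` to actually run the evaluations.
for ckpt in checkpoints/kit/kit_motiondiffuse checkpoints/t2m/t2m_motiondiffuse; do
  echo python -u tools/evaluation.py "$ckpt/opt.txt" 0
done
```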
You can visualize human motion from a given language description and an expected motion length. We also provide a Colab Demo and a Hugging Face Demo for your convenience.

```shell
# Currently we only support visualization of models trained on the HumanML3D dataset
# Motion length cannot be larger than 196, the maximum length seen during training
# You can omit `--gpu_id` to run visualization on your CPU
python -u tools/visualization.py \
--opt_path checkpoints/t2m/t2m_motiondiffuse/opt.txt \
--text "a person is jumping" \
--motion_length 60 \
--result_path "test_sample.gif" \
--gpu_id 0
```
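To render several descriptions in one go, you can wrap the command above in a loop. The prompts and output file names below are illustrative placeholders, and `echo` makes it a dry run:

```shell
# Batch visualization sketch: one GIF per prompt, reusing the command above.
# Remove `echo` to actually run; prompts and file names are placeholders.
prompts=("a person is jumping" "a person is running" "a person waves both hands")
for i in "${!prompts[@]}"; do
  gif="sample_${i}.gif"
  echo python -u tools/visualization.py \
    --opt_path checkpoints/t2m/t2m_motiondiffuse/opt.txt \
    --text "${prompts[$i]}" \
    --motion_length 60 \
    --result_path "$gif" \
    --gpu_id 0
done
```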
Here are some visualization examples. The motion lengths are shown in the titles of the animations.

Note: you may need to install matplotlib==3.3.1 for visualization to work.
This code is developed on top of Generating Diverse and Natural 3D Human Motions from Text.