postech-ami / FastMETRO

[ECCV'22] Official PyTorch Implementation of "Cross-Attention of Disentangled Modalities for 3D Human Mesh Recovery with Transformers"

Home Page: https://fastmetro.github.io/

Time cost and memory cost of FastMETRO

MooreManor opened this issue · comments

@FastMETRO

Hello! FastMETRO is nice work. I would like to know the training time and memory costs of the two FastMETRO variants.

With --per_gpu_train_batch_size 16 and the mixed datasets, how long does one training epoch take for each of the two variants on your GPUs, and how much GPU memory is used on a single card?

Hello,

We conduct single-node distributed training on a machine with 4 NVIDIA V100 GPUs (16GB memory each). We set --per_gpu_train_batch_size to 16 and --num_workers to 4.
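For reference, a single-node multi-GPU run with these settings can be launched along the following lines. This is only a sketch: the script path and any dataset arguments are assumptions for illustration, not the exact command from the repository, so please check the FastMETRO README for the authoritative invocation.

```shell
# Sketch of single-node distributed training on 4 GPUs.
# NOTE: the script path below is an assumption based on the repo layout;
# see the FastMETRO README for the exact training command and data flags.
python -m torch.distributed.launch --nproc_per_node=4 \
    src/tools/run_fastmetro_bodymesh.py \
    --per_gpu_train_batch_size 16 \
    --num_workers 4
```

With --nproc_per_node=4 and a per-GPU batch size of 16, the effective batch size is 64.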

The time and memory costs for training on the mixed datasets in our environment are:

[ FastMETRO-S-R50 ]

  • Training time per epoch: ~0.6 hours
  • Memory used on each GPU: ~5 GB

[ FastMETRO-L-H64 ]

  • Training time per epoch: ~1.5 hours
  • Memory used on each GPU: ~14 GB

Thanks for your interest in our work!!

Please reopen this issue if you need more help regarding this.