megvii-research / MOTR

[ECCV2022] MOTR: End-to-End Multiple-Object Tracking with TRansformer

About the 'memory-optimized version' mentioned in the paper

HELLORPG opened this issue

Hi, I've read your ECCV 2022 paper and believe that your research is truly meaningful.

I'm trying to reproduce your experiments. In Section 4.2 of your paper, you mention that you "provide a memory-optimized version that can be trained on NVIDIA 2080 Ti GPUs", but I didn't find any details about it in this repository.

Did you release the memory-optimized code? If not, will you release it?

Thanks a lot for your contribution.

Yes, the checkpointing (memory optimization) code is released.
Just add the --use_checkpoint argument (I think it is already added in the script).
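
For anyone else landing here: --use_checkpoint enables gradient checkpointing, which recomputes intermediate activations during the backward pass instead of storing them. A minimal PyTorch sketch of the technique; the module and its layers here are illustrative, not MOTR's actual encoder:

```python
# Minimal sketch of gradient checkpointing in PyTorch. CheckpointedEncoder
# is a hypothetical module, not code from the MOTR repo.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class CheckpointedEncoder(nn.Module):
    def __init__(self, d_model=256, nhead=8, num_layers=6, use_checkpoint=True):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
             for _ in range(num_layers)]
        )
        self.use_checkpoint = use_checkpoint

    def forward(self, x):
        for layer in self.layers:
            if self.use_checkpoint and self.training:
                # Don't store this layer's activations; recompute them during
                # backward. Trades extra compute for lower peak memory.
                # (use_reentrant=False needs PyTorch >= 1.11.)
                x = checkpoint(layer, x, use_reentrant=False)
            else:
                x = layer(x)
        return x

# Usage: a (batch, seq, d_model) tensor passes through unchanged in shape.
enc = CheckpointedEncoder()
out = enc(torch.randn(2, 100, 256))
```

The trade-off is roughly one extra forward pass per checkpointed layer in exchange for not storing its activations, which is what makes training on 11 GB-class cards feasible.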

It was my fault that I misunderstood the meaning of the --use_checkpoint argument.
Thanks a lot!

Hi, is the memory-optimized version trained on 8× 2080 Ti GPUs or only a single 2080 Ti?

I used this argument and trained on 8 TITAN Xp GPUs.

Using fewer cards for training will degrade performance.

And if you use a 2080 Ti for training, 11 GB of CUDA memory may not be enough. But if you change some code, I think it will be fine on a 2080 Ti.
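
The thread doesn't say which code to change, but one generic way to cut CUDA memory on top of checkpointing is automatic mixed precision (AMP). A minimal, self-contained sketch with a placeholder model, not code from the MOTR repo:

```python
# Generic AMP training loop: fp16 activations roughly halve activation
# memory. Model, optimizer settings, and data here are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 10)).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)
scaler = torch.cuda.amp.GradScaler()  # scales the loss so fp16 grads don't underflow

for step in range(10):
    inputs = torch.randn(32, 256, device="cuda")
    targets = torch.randint(0, 10, (32,), device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():  # run the forward pass in fp16 where safe
        loss = F.cross_entropy(model(inputs), targets)
    scaler.scale(loss).backward()  # backward on the scaled loss
    scaler.step(optimizer)         # unscale grads, then take the optimizer step
    scaler.update()                # adjust the scale factor for the next step
```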

Thanks for your prompt reply, thanks a lot!

😄 You're welcome.