megvii-research / MOTR

[ECCV2022] MOTR: End-to-End Multiple-Object Tracking with TRansformer


Growing memory

sjtuytc opened this issue

Hi, I found that when I use 8 2080Ti GPUs to train this model, the GPU memory occupation is about 6/8 at the initial stage, but the GPUs soon run out of memory. Do you have an explanation for this? And what is the suggested setup for training the MOTR model?

Hi~, thanks for your attention!
We train MOTR by gradually increasing the number of sampled frames per clip from 2 to 5, so you need GPUs with at least 24 GB of memory (e.g., a P40 or V100).
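
For reference, here is a minimal Python sketch of how such a progressive clip-length schedule behaves. The epoch thresholds used below are illustrative assumptions, not necessarily the values in this repo's training scripts; check the actual `--sampler_steps` / `--sampler_lengths` arguments there.

```python
# Illustrative sketch of a progressive clip-length schedule.
# The epoch thresholds (sampler_steps) are assumptions for illustration.

def clip_length_for_epoch(epoch, sampler_steps=(50, 90, 150),
                          sampler_lengths=(2, 3, 4, 5)):
    """Return how many frames are sampled per clip at a given epoch.

    Training starts with short clips (2 frames) and switches to longer
    ones at each threshold in `sampler_steps`, which is why GPU memory
    use grows over the course of training.
    """
    length = sampler_lengths[0]
    for step, n_frames in zip(sampler_steps, sampler_lengths[1:]):
        if epoch >= step:
            length = n_frames
    return length

for e in (0, 60, 100, 160):
    print(e, clip_length_for_epoch(e))  # -> 2, 3, 4, 5
```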

Hi~, I use a 3090 to train this model, but when training reaches epoch 150, the GPU runs out of memory. How can I solve this problem?

commented

I suppose you can make the argument '--sampler_lengths' smaller, e.g., replace [2,3,4,5] with [2,3,4,4].
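
Plugging that change into the sketch above shows the effect (again, the epoch thresholds are assumed):

```python
# With sampler_lengths capped at 4, the final stage never grows to
# 5-frame clips, which lowers peak memory at the cost of shorter
# temporal context in late training.
for e in (0, 60, 100, 160):
    print(e, clip_length_for_epoch(e, sampler_lengths=(2, 3, 4, 4)))
    # -> 2, 3, 4, 4
```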