megvii-research / MOTR

[ECCV2022] MOTR: End-to-End Multiple-Object Tracking with TRansformer

I want to know MOTR's performance on the MOT17 train dataset because I got a very high MOTA

Soulmate7 opened this issue · comments

commented

Hi! Congratulations on your nice work.
When I run eval.py with the provided pretrained model, I get very shocking results:

(screenshot: eval results on the MOT17 train set)

Then I evaluated my own model, trained for 70+ epochs on CrowdHuman and MOT17, and got the following result:

(screenshot: eval results)

I am shocked by its performance on the train dataset, so I would like to know: is this normal?
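For anyone who wants to cross-check the number outside eval.py, below is a minimal sketch that recomputes MOTA from MOTChallenge-format txt files with the py-motmetrics package. The file paths and the load_mot_txt helper are illustrative assumptions, and the sketch skips the official preprocessing (filtering distractor classes and ignored ground-truth boxes), so its absolute score will not exactly match the official evaluation.

```python
# Hedged sketch: recompute MOTA with py-motmetrics as a cross-check.
# Assumes both ground truth and results are MOTChallenge-style txt:
# frame, id, x, y, w, h, ...
import numpy as np
import motmetrics as mm

def load_mot_txt(path):
    """Hypothetical helper: load a MOTChallenge txt into {frame: (ids, boxes)}."""
    data = np.atleast_2d(np.loadtxt(path, delimiter=','))
    frames = {}
    for row in data:
        f = int(row[0])
        ids, boxes = frames.setdefault(f, ([], []))
        ids.append(int(row[1]))
        boxes.append(row[2:6])  # x, y, w, h
    return frames

def sequence_mota(gt_path, res_path):
    gt, res = load_mot_txt(gt_path), load_mot_txt(res_path)
    acc = mm.MOTAccumulator(auto_id=True)
    for f in sorted(gt):
        gt_ids, gt_boxes = gt[f]
        hyp_ids, hyp_boxes = res.get(f, ([], []))
        # IoU-based distances; pairs with IoU < 0.5 are treated as no match.
        dists = mm.distances.iou_matrix(
            np.asarray(gt_boxes, dtype=float).reshape(-1, 4),
            np.asarray(hyp_boxes, dtype=float).reshape(-1, 4),
            max_iou=0.5)
        acc.update(gt_ids, hyp_ids, dists)
    mh = mm.metrics.create()
    return mh.compute(acc, metrics=['mota', 'idf1', 'num_switches'], name='seq')

# Example paths are placeholders for one train sequence and its tracking output.
print(sequence_mota('MOT17/train/MOT17-02/gt/gt.txt', 'results/MOT17-02.txt'))
```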

commented

Hello. I am trying to follow the MOTR work, and I also get the same shockingly high MOTA. Have you found the reason?

commented

The high performance on the MOT17 training set may be attributed to overfitting, as MOT17 is a small dataset with only about 5k frames. Therefore, many works use additional data such as CrowdHuman to mitigate this issue.

A possible reason for the performance gap is the short training schedule (only 70 epochs).
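Also note that a score computed on sequences seen during training is not very meaningful on its own. A common workaround (a general convention in the MOT literature, not something specific to this repo) is to train on the first half of each MOT17 train sequence and validate on the second half; a small illustrative sketch of that split:

```python
# Hedged sketch of the common "half-half" MOT17 validation split:
# train on the first half of each training sequence and validate on the
# second half, so the reported MOTA is not computed on frames used for training.
def half_split(seq_lengths):
    """Return {seq: (train_frames, val_frames)} given {seq: num_frames}."""
    splits = {}
    for seq, n in seq_lengths.items():
        mid = n // 2
        splits[seq] = (range(1, mid + 1), range(mid + 1, n + 1))
    return splits

if __name__ == "__main__":
    # Example lengths; in practice they come from each sequence's seqinfo.ini.
    splits = half_split({"MOT17-02": 600, "MOT17-04": 1050})
    for seq, (tr, va) in splits.items():
        print(seq,
              "train frames:", tr.start, "-", tr.stop - 1,
              "| val frames:", va.start, "-", va.stop - 1)
```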
