dk-liang / CLTR

[ECCV 2022] An End-to-End Transformer Model for Crowd Localization

MAE is the same when training

SherlockHolmes221 opened this issue

The MAE stays the same when training on my own dataset.

How many GPUs do you use?

I also have the same problem when using the JHU and NWPU datasets. I use one GPU.

Try using my default command.

Can you explain that in more detail? I don't quite understand what you mean. Thank you.

Can you try running with 4 GPUs? I haven't tried running with 1 GPU.

I use 1 GPU and only have one, so I cannot test with 4.

The model cannot predict any person, so the MAE does not change.
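
A one-line way to see why the number can sit perfectly flat: if the model predicts zero people for every image, the MAE collapses to the mean ground-truth count of the evaluation set and never moves from epoch to epoch. A minimal sketch with made-up counts (not from JHU or NWPU):

```python
# Minimal illustration: when a crowd-localization model predicts no points,
# the MAE equals the mean ground-truth count and stays constant.
gt_counts = [120, 45, 300, 80]   # hypothetical ground-truth counts per image
pred_counts = [0, 0, 0, 0]       # model predicts nobody

mae = sum(abs(p - g) for p, g in zip(pred_counts, gt_counts)) / len(gt_counts)
print(mae)  # 136.25 == mean ground-truth count; unchanged as long as predictions stay empty
```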

Okay, I'll try running with 4 GPUs later. Thank you.

May I ask if this same MAE problem has been solved?

No, it has not been solved.

I haven't tried training on 1 GPU; some hyperparameters may need adjusting.

Which parameters?
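
For anyone reading later: the thread never says which parameters, but when the GPU count drops from 4 to 1 the effective batch size usually drops with it, and the learning rate is the most common thing to rescale. A minimal sketch of the linear scaling rule, using hypothetical baseline values rather than CLTR's actual defaults:

```python
# Linear scaling rule: scale the learning rate in proportion to the effective
# batch size. The baseline numbers below are assumptions, not CLTR's defaults.
base_lr = 1e-4          # learning rate assumed to be tuned for 4 GPUs
base_gpus = 4
batch_per_gpu = 8       # assumed per-GPU batch size

def scaled_lr(num_gpus: int) -> float:
    """Return a learning rate scaled to the new effective batch size."""
    effective_batch = num_gpus * batch_per_gpu
    base_batch = base_gpus * batch_per_gpu
    return base_lr * effective_batch / base_batch

print(scaled_lr(1))  # 2.5e-05: a starting point when training on a single GPU
```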

I also cannot reproduce the results using your repository.

I checked the training procedure with 1, 2, and 4 GPUs, and all runs produced the same unchanging MAE.
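
If others want to debug this, one quick sanity check is to count how many predicted points pass the confidence threshold during validation; if that count is zero for every image, the flat MAE is just the symptom described above. The tensor and function names below are hypothetical placeholders, not CLTR's actual API:

```python
import torch

# Hypothetical sanity check: count predicted points above a confidence
# threshold for one image. Replace the dummy tensor with the model's real output.
def count_predicted_points(scores: torch.Tensor, threshold: float = 0.5) -> int:
    """scores: (num_queries,) confidence per predicted point for one image."""
    return int((scores > threshold).sum().item())

dummy_scores = torch.tensor([0.01, 0.03, 0.02])
print(count_predicted_points(dummy_scores))  # 0 -> model predicts nobody, MAE stays flat
```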

@SherlockHolmes221 did you solve it? Why did you mark it as completed without any answer?

Why did you mark it as completed without any answer? What are you avoiding?