SvipRepetitionCounting / TransRAC

(CVPR 2022 Oral) Official implementation: TransRAC


Regarding the performance indicators of Zhang et al. [39] in the paper

salute-hh opened this issue · comments

commented

I retrained Zhang et al. [39]'s method on the RepCountA split, and the MAE and OBO I get on the RepCountA test set differ significantly from what you reported in the paper; in fact, the two values appear to be exactly swapped. Using Zhang et al. [39]'s method, my MAE and OBO on the RepCountA test set are approximately 0.14 and 0.85. Moreover, judging from Zhang et al. [39]'s original paper, their method is not that poor. I hope you can check this, and I look forward to your reply.

Hi, I do not know which dataset you used. For a fair comparison, we trained with the same 64-frame npz files that our TransRAC uses. We trained for 100 epochs, which takes around 2 days. Here is our validation log:

val_Repcount.log

On the test set, the OBO and MAE are around 0.1554 and 0.8786, respectively, which is what we report.
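For reference, the two metrics discussed throughout this thread are typically defined as follows for repetition counting on RepCountA: MAE is the absolute counting error normalized by the ground-truth count, averaged over videos, and OBO is the fraction of videos whose predicted count is within 1 of the ground truth. A minimal sketch (the function name `mae_obo` is mine, not from the repo):

```python
import numpy as np

def mae_obo(pred_counts, gt_counts):
    """MAE = mean(|pred - gt| / gt); OBO = fraction with |pred - gt| <= 1,
    the usual repetition-counting metrics on RepCountA."""
    pred = np.asarray(pred_counts, dtype=float)
    gt = np.asarray(gt_counts, dtype=float)
    mae = float(np.mean(np.abs(pred - gt) / gt))
    obo = float(np.mean(np.abs(pred - gt) <= 1))
    return mae, obo

# Toy example: three videos with predicted and ground-truth counts.
mae, obo = mae_obo([10, 5, 8], [10, 6, 12])
```

With these definitions, lower MAE is better and higher OBO is better, which is why the two reported orderings in this thread read as "swapped".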

If you have any other questions, feel free to reopen the issue.

commented

Thank you very much for your answer! I still have the following questions: did you train directly with the code published by Zhang, or did you modify the configuration, for example the training-set parameter `n_samples_for_each_video = 10`? If you modified the configuration file, could you make it public? Looking forward to your reply, thank you very much~

I used the default settings of the paper, as here (https://github.com/Xiaodomgdomg/Deep-Temporal-Repetition-Counting/blob/master/opts.py#L53), and did not change any default setting.

commented

Thank you very much for your reply, which has resolved my confusion of many days. You said you trained with the same 64-frame npz files as TransRAC, so I assume you modified Zhang's data loading and processing code. Could that part of the code be made public? Looking forward to your reply, thank you very much!

That code is not part of this GitHub repo. If you need it, leave your email here and I will send it to you.
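Since that loader is not public, here is a hedged sketch of what reading one of the preprocessed "64-frame npz" clip files might look like. The key name (`imgs`) and array layout (frames, height, width, channels) are assumptions for illustration, not the repo's actual format:

```python
import io
import numpy as np

def load_clip(npz_file, key="imgs"):
    # Read one preprocessed clip; the key name and layout are assumed.
    with np.load(npz_file) as data:
        clip = data[key]
    assert clip.shape[0] == 64, "expected 64 sampled frames per clip"
    return clip

# Round-trip demo with a dummy clip (in memory, no file on disk).
buf = io.BytesIO()
np.savez(buf, imgs=np.zeros((64, 112, 112, 3), dtype=np.uint8))
buf.seek(0)
clip = load_clip(buf)
```

The point of such a loader is that both methods see the identical 64 sampled frames per video, which is what makes the comparison in the paper fair.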

commented

Thank you very much! My email is count123123@163.com. If possible, please send the complete code of the project. Thank you again for your reply!

commented

At present my computing power is also limited, so I would need to modify some settings, such as `batch_size`, to run Zhang's method. Sorry to bother you again, but for the sake of a fair comparison in the future (i.e., without changing the training configuration), could you provide the weights you trained on the RepCount dataset with Zhang's method? That would make it easier for me to cite your results directly in future papers.
Looking forward to your reply~~~
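As an aside on the limited-memory concern above: when `batch_size` must be reduced to fit the GPU, gradient accumulation over several micro-batches preserves the effective batch size, so the update matches the original configuration. This is a generic technique, not the repo's code; the sketch below demonstrates the equivalence on a linear least-squares gradient with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))   # one "full" batch of 8 samples
y = rng.normal(size=8)
w = np.zeros(3)

def grad(Xb, yb, w):
    # Gradient of 0.5 * mean((Xb @ w - yb)^2) with respect to w.
    return Xb.T @ (Xb @ w - yb) / len(yb)

# Full-batch gradient versus the average of 4 micro-batch gradients
# (micro-batches of 2): with equal micro-batch sizes they coincide.
full = grad(X, y, w)
micro = np.mean([grad(X[i:i + 2], y[i:i + 2], w) for i in range(0, 8, 2)],
                axis=0)
```

In a training loop this corresponds to summing scaled losses over micro-batches and calling the optimizer step once per accumulation cycle.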