chahuja / language2pose

Language2Pose: Natural Language Grounded Pose Forecasting

Home Page: http://chahuja.com/language2pose

evaluation

seyeeet opened this issue · comments

I was not able to reproduce the numbers reported in the paper for the evaluation.
Could you provide the code for the evaluation metric and instructions on how it should be run?

Where are you getting the numbers from exactly? Are you training your own model, or are these numbers the ones from the pre-trained model?

Thank you for your reply. I trained the model using the code in this repo. Should I evaluate the pre-trained model instead to get the numbers in the paper? Could you let me know which script I need to run for that?

I also have another question regarding the evaluation. In the paper, you mention that you downsample the motion sequences by a factor of 8 (from 100 Hz to 12.5 Hz), and that each sequence ends up with a fixed length T.

What happens when a sequence is longer than T? The KIT dataset has sequences of different lengths, so how should I handle that case? I did not understand how you dealt with it. I looked at the algorithm description as well but I am still confused about what is happening:
[algorithm figure from the paper]
Under what condition do we enter the last if statement?
And when you do t <- 2t, are you doubling T?
I am also not sure why the maxvl loss is initialized to inf every time.
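For reference, here is a minimal sketch of how downsampling plus a fixed sequence length T could be handled. The shape (L, D), the truncation of long sequences, and the repeat-last-frame padding are my assumptions for illustration, not necessarily what the paper's algorithm (or the t <- 2t step) actually does:

```python
import numpy as np

def downsample_and_fix_length(seq, factor=8, T=32):
    """Downsample a motion sequence and force it to a fixed length T.

    seq: array of shape (L, D) -- L frames at 100 Hz, D pose dimensions.
    factor=8 keeps every 8th frame (100 Hz -> 12.5 Hz).
    Sequences longer than T are truncated; shorter ones are padded by
    repeating the last frame (one possible policy, assumed here).
    """
    seq = seq[::factor]                       # 100 Hz -> 12.5 Hz
    if len(seq) >= T:
        return seq[:T]                        # truncate long sequences
    # pad short sequences by repeating the last frame
    pad = np.repeat(seq[-1:], T - len(seq), axis=0)
    return np.concatenate([seq, pad], axis=0)
```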

Hello,
I would also like to know how to get the numbers in Table 1 of the paper. Even using the pre-trained model I am not able to reproduce these numbers for the individual joints. If possible, please share the code for the APE evaluation metric you used.
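Not from the repo, but for comparison: my reading of the paper's APE (Average Position Error) is the mean L2 distance between predicted and ground-truth joint positions, averaged over timesteps and test sequences, reported per joint. A minimal sketch, where the tensor layout (N, T, J, 3) is an assumption:

```python
import numpy as np

def average_position_error(pred, gt):
    """Per-joint Average Position Error (APE).

    pred, gt: arrays of shape (N, T, J, 3) -- N sequences, T timesteps,
    J joints, 3D positions. Returns an array of shape (J,): the mean L2
    distance between predicted and ground-truth positions of each joint,
    averaged over all sequences and timesteps.
    """
    assert pred.shape == gt.shape
    dist = np.linalg.norm(pred - gt, axis=-1)   # (N, T, J)
    return dist.mean(axis=(0, 1))               # (J,)
```

If your numbers differ from Table 1, it would also help to confirm whether the comparison is done in the same coordinate frame and at the same frame rate as the paper.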