quancore / social-lstm

Social LSTM implementation in PyTorch

Datasets used are different from the original ones: what preprocessing has been done to obtain them?

setarehc opened this issue

I have realized that the datasets used in your code are different from the original datasets I found at their sources.
To go from the original, raw datasets to the ones you've used here, I can see that you have ordered the data entries by ped-id and frame-num, deleted sequences shorter than 20 frames, and, for sequences longer than 20 frames, trimmed the excess frames.
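For concreteness, the filtering I am describing looks roughly like this (just my own sketch, assuming the raw annotation files are whitespace-separated with columns frame, ped_id, x, y; I don't know the exact column layout you started from):

```python
import pandas as pd

SEQ_LEN = 20  # sequences shorter than this are dropped, longer ones are trimmed

def filter_trajectories(path):
    # Column names/order are assumed, not taken from this repo.
    df = pd.read_csv(path, sep=r"\s+", header=None,
                     names=["frame", "ped_id", "x", "y"])
    # Order the entries by pedestrian id, then by frame number.
    df = df.sort_values(["ped_id", "frame"])

    kept = []
    for _, traj in df.groupby("ped_id"):
        if len(traj) < SEQ_LEN:
            continue                      # drop trajectories shorter than 20 frames
        kept.append(traj.iloc[:SEQ_LEN])  # trim any excess frames beyond 20
    return pd.concat(kept, ignore_index=True)
```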
However, the x-y position values in your datasets are different from the original ones. I suppose some kind of preprocessing/transformation was applied to obtain them, but I couldn't find any code or explanation for this part. Therefore, I went ahead and applied the homography mapping from https://github.com/t2kasa/social_lstm_keras_tf: obtaining image coordinates from the world coordinates via the inverse of the homography matrix and then normalizing them by dividing the x-y position values by the image size.
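In code, the transformation I applied looks roughly like this (a rough sketch based on my reading of that repo; I'm assuming H maps image coordinates to world coordinates, so its inverse goes the other way, and that image_size is (width, height) in pixels):

```python
import numpy as np

def world_to_normalized_image(xy_world, H, image_size):
    # xy_world: (N, 2) array of world coordinates.
    # H: 3x3 homography assumed to map image -> world (hence the inverse below).
    # image_size: (width, height) in pixels.
    ones = np.ones((xy_world.shape[0], 1))
    pts = np.hstack([xy_world, ones])        # homogeneous world coordinates
    img = pts @ np.linalg.inv(H).T           # apply the inverse homography
    img = img[:, :2] / img[:, 2:3]           # back to Cartesian image coordinates
    return img / np.asarray(image_size)      # normalize x, y by image size
```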
However, the x-y position values still don't match the ones in your datasets.
Could you share the preprocessing and transformations you applied to the original raw datasets to arrive at the ones used here?

The dataset was obtained from http://trajnet.epfl.ch/; this was a semester project to implement another algorithm for benchmarking purposes, so I received the data already preprocessed. You can direct further questions to that website.

Great, thanks for the link. I couldn't find any information there about the preprocessing performed on the original datasets, but I can contact the owners/partners and ask my questions.

commented

@setarehc Hello, have you found a way to process the data?