aras62 / SF-GRU

Pedestrian Action Anticipation using Contextual Feature Fusion in Stacked RNNs

nehasharma2k20phd507 opened this issue · comments

One thing you might want to check first is to delete the dataset cache file and run again: if you change the sets in _get_image_set_ids, there might be an issue with the cached file. Second, what is the dimension of d['act']? It should be three-dimensional, [num_samples, seq_length, 1], and after d['act'] = d['act'][:,0,:] it should be [num_samples, 1]. If you changed anything in get_sequence, it might affect that.

Originally posted by @aras62 in #1 (comment)
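To make the shapes in the comment above concrete, here is a minimal NumPy sketch (not the repository's code; num_samples and seq_length are hypothetical values) of the layout the pipeline expects and what the [:, 0, :] slice does:

```python
import numpy as np

num_samples, seq_length = 4, 15  # hypothetical values for illustration

# Expected layout of the action labels: one label per time step,
# stacked into a rectangular 3-D array [num_samples, seq_length, 1].
act = np.zeros((num_samples, seq_length, 1))
assert act.shape == (num_samples, seq_length, 1)

# d['act'] = d['act'][:, 0, :] keeps only the first time step of each
# sample, collapsing the array to [num_samples, 1].
act = act[:, 0, :]
assert act.shape == (num_samples, 1)
```

If d['act'] is not 3-D to begin with, the [:, 0, :] slice is what raises the indexing error.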

Hello authors, firstly thank you for your contribution. I wanted to ask for your help in fixing this issue. I took the PIE dataset from the repository and am getting this "index out of dimensions" error; I am attaching a screenshot of the output when running on Colab. As you suggested, I checked the dimensions: every element of the dictionary d has shape (num_samples,). Does this dataset require some preprocessing? I am also getting a VisibleDeprecationWarning.
[screenshot: issue output]

Also, I am not changing anything in get_sequence, nor am I running on only part of the dataset; I am running on the complete set of 6 videos. Kindly help.
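The (num_samples,) shapes together with the VisibleDeprecationWarning are the classic symptom of ragged sequences: when the per-sample sequences have unequal lengths, NumPy cannot stack them into a rectangular 3-D array and falls back to a 1-D object array. A minimal sketch of the difference (hypothetical lengths, not the repository's data):

```python
import numpy as np

# Unequal sequence lengths: np.array() cannot build a rectangular
# [num_samples, seq_length, 1] array, so it produces a 1-D object
# array of shape (num_samples,) instead (older NumPy versions emit a
# VisibleDeprecationWarning at this point).
ragged = [np.zeros((15, 1)), np.zeros((12, 1))]  # hypothetical lengths
arr = np.array(ragged, dtype=object)
print(arr.shape)  # (2,) -- not (2, seq_length, 1)

# With equal lengths the stack succeeds and the [:, 0, :] slice works:
even = np.array([np.zeros((15, 1)), np.zeros((15, 1))])
print(even.shape)           # (2, 15, 1)
print(even[:, 0, :].shape)  # (2, 1)
```

So it may be worth checking whether the sequences produced for your 6 video sets all have the same length before they are stacked, and deleting the cache file so stale data is not reused.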