aras62 / SF-GRU

Pedestrian Action Anticipation using Contextual Feature Fusion in Stacked RNNs

Testing the pretrained model

golnazhabibi3 opened this issue · comments

Hi, thanks for the impressive work! I want to run only the test (using the pretrained model you have provided), but when I do, I get this error:

File "/home/SF-GRU/test.py", line 9, in
beh_seq_test = imdb.generate_data_trajectory_sequence('test', **data_opts)
File "/home/SF-GRU/pie_data.py", line 891, in generate_data_trajectory_sequence
sequence_data = self._get_crossing(image_set, annot_database, **params)
File "/home/SF-GRU/pie_data.py", line 1003, in _get_crossing
set_ids, _pids = self._get_data_ids(image_set, params)
File "/home/SF-GRU/pie_data.py", line 791, in _get_data_ids
_pids = self._get_random_pedestrian_ids(image_set, **params['random_params'])
File "/home/SF-GRU/pie_data.py", line 724, in _get_random_pedestrian_ids
train_samples, test_samples = train_test_split(ped_ids, train_size=ratios[0])
File "/home/SF-GRU/pedintent/lib/python3.5/site-packages/sklearn/model_selection/_split.py", line 2122, in train_test_split
default_test_size=0.25)
File "/home/SF-GRU/pedintent/lib/python3.5/site-packages/sklearn/model_selection/_split.py", line 1805, in _validate_shuffle_split

ValueError: With n_samples=0, test_size=None and train_size=0.5, the resulting train set will be empty. Adjust any of the aforementioned parameters.
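
For context, this ValueError comes from sklearn's train_test_split being handed an empty pedestrian-id list (n_samples=0), which usually means no annotations were found at the expected path. A minimal sketch of the failure mode, using a hypothetical split_ids helper in place of the real sklearn call:

```python
def split_ids(ped_ids, train_size=0.5):
    """Hypothetical stand-in for sklearn's train_test_split validation."""
    n_train = int(len(ped_ids) * train_size)
    if n_train == 0:
        # Mirrors the ValueError in the traceback above: an empty id
        # list leaves the train set empty for any train_size.
        raise ValueError("With n_samples=%d and train_size=%s, the "
                         "resulting train set will be empty."
                         % (len(ped_ids), train_size))
    return ped_ids[:n_train], ped_ids[n_train:]
```

So the error points at the annotation loading step, not at the split ratio itself.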

I have put the test annotations in the following format:
SF-GRU
pie_dataset
annotations
set01
video_0001_annt.xml
video_0002_annt.xml
video_0003_annt.xml
video_0004_annt.xml

Do you know what the issue is? If you could provide more detail about the test and train files (or the sample.py file in the repo), that would be very helpful. Thanks!

Update: I have fixed that issue, but I got other errors in get_data_sequence. It complains that the index is out of range (for d) for some scenes that are not long enough. I tried to fix that by ignoring those sequences, but then I got another error when assigning d['act'] = d['act'][:,0,:]. After ignoring that as well, I got an error in load_images_crop_and_process complaining that no image is found ...
I used the following code for the test:

from sf_gru import SFGRU
from pie_data import PIE

data_opts = {'seq_type': 'crossing'}
imdb = PIE(data_path='/home/SF-GRU/pie_dataset/')

method_class = SFGRU()
beh_seq_test = imdb.generate_data_trajectory_sequence('test', **data_opts)
saved_files_path = '/home/SF-GRU/data/models/pie/sf-rnn/'
acc, auc, f1, precision, recall = method_class.test(beh_seq_test, saved_files_path)
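
Given the traceback, it looks like the random pedestrian split was triggered; one possible fix is to request the predefined set-based split through data_opts instead. A sketch, assuming pie_data.py accepts the same option keys as the JAAD interface used later in this thread (verify the key names against your copy of pie_data.py):

```python
# Hypothetical data_opts that avoid _get_random_pedestrian_ids by
# using the predefined set01/set02/set03 split instead of a random one.
# Key names mirror the JAAD options used elsewhere in this thread.
data_opts = {'fstride': 1,
             'data_split_type': 'default',  # instead of 'random'
             'seq_type': 'crossing',
             'min_track_size': 75}
```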

I also set the train, val, and test image sets in pie_data.py as follows (I am not training, though):

def _get_image_set_ids(self, image_set):
    """
    Returns default image set ids
    :param image_set: Image set split
    :return: Set ids of the image set
    """
    image_set_nums = {'train': ['set01'],
                      'val': ['set02'],
                      'test': ['set03'],
                      'all': ['set01','set02','set03']}
    return image_set_nums[image_set]

I got pie_data.py from the PIE annotations repository.

Thanks for your help!

One thing you might want to check first: make sure to delete the dataset cache file and run again. If you change the sets in _get_image_set_ids, there may be an issue with the cached file. Second, what is the dimension of d['act']? It should be three-dimensional, [num_samples, seq_length, 1], and after d['act'] = d['act'][:,0,:] it should be [num_samples, 1]. If you changed anything in get_sequence, it might affect that.
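
The expected shapes can be checked with a quick NumPy sketch (the sample and sequence sizes here are made up for illustration):

```python
import numpy as np

# Dummy action labels shaped as described above:
# [num_samples, seq_length, 1]
num_samples, seq_length = 4, 15
act = np.zeros((num_samples, seq_length, 1))
assert act.ndim == 3  # must be 3-dimensional before slicing

# Keep only the label of the first time step per sample
act = act[:, 0, :]
assert act.shape == (num_samples, 1)
```

If act has fewer than three dimensions at the slicing step, you get exactly the "too many indices for array" error reported below.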

When I run this code, I also get the error "too many indices for array" on the line d['acts'] = d['acts'][:, 0, :], so I think there may be a problem with d['acts'].

Hi,

Thanks for using the code. I have just tested it and everything works out of the box. However, I added a sample script that you can run to make sure the code is running correctly. As for your issue, are you using the model on part of the data? If so, you need to make sure the action data is three-dimensional, as described above.

First, thank you sincerely for your reply. I am sure that I used the model at the path 'data/models/..../.pkl'. After running the new script that you added, a new error pops up: KeyError: 'images' on the line data[k]['pose'] = self.get_pose(data[k]['images']).

Hi
I tried your new script using the JAAD dataset. This is my running code:

import os
from sf_gru import SFGRU
from pie_data import PIE
from jaad_data import JAAD

data_opts = {'fstride': 1,
             'subset': 'default',
             'data_split_type': 'random',  # kfold, random, default
             'seq_type': 'crossing',
             'min_track_size': 75}  # for obs length of 15 frames + 60 frames tte; adjust for different setups
imdb = JAAD(data_path='./JAAD_JAAD_2.0/')
model_opts = {'obs_input_type': ['local_box', 'local_context', 'pose', 'box', 'speed'],
              'enlarge_ratio': 1.5,
              'pred_target_type': ['crossing'],
              'obs_length': 15,  # determines min track size
              'time_to_event': 60,  # determines min track size
              'dataset': 'pie',
              'normalize_boxes': True}
method_class = SFGRU()
saved_files_path = './data/models/pie/sf-rnn/'
beh_seq_train = imdb.generate_data_trajectory_sequence('train', **data_opts)
saved_files_path = method_class.train(beh_seq_train, model_opts=model_opts)

Hi,

You need to generate poses if you intend to train the full model. If you don't have the pose information, you can remove it from the input types as follows: 'obs_input_type': ['local_box', 'local_context', 'box', 'speed'].
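
Following that suggestion, a pose-free configuration might look like the sketch below; the remaining keys are copied from the training script earlier in the thread, and the 'dataset' value is an assumption to adjust for your setup:

```python
# Hypothetical model_opts with 'pose' removed from the input types,
# per the suggestion above; remaining keys mirror the earlier script.
model_opts = {'obs_input_type': ['local_box', 'local_context', 'box', 'speed'],
              'enlarge_ratio': 1.5,
              'pred_target_type': ['crossing'],
              'obs_length': 15,
              'time_to_event': 60,
              'dataset': 'pie',  # adjust to match the dataset in use
              'normalize_boxes': True}
```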

Hi
How can I solve the KeyError: 'images' raised from the line "if k in set_poses[set_id][vid_id].keys():"? It is at line 236 of sf_action.py. Thank you very much!

Hi
Thank you very much. I tried it again and trained the model; after that, the code ran successfully.