aosokin / os2d

OS2D: One-Stage One-Shot Object Detection by Matching Anchor Features


make the code faster


Hi again, nice work!

I went over your code and found some lines that look duplicated in head.py, lines 393-402:

output_recognition = self.resample_of_correlation_map_fast(
    cor_maps_for_recognition,
    resampling_grids_fm_coord_unit,
    self.class_pool_mask)

if output_recognition.requires_grad:
    output_recognition_transform_detached = self.resample_of_correlation_map_fast(
        cor_maps_for_recognition,
        resampling_grids_fm_coord_unit.detach(),
        self.class_pool_mask)
else:
    # Optimization to make eval faster
    output_recognition_transform_detached = output_recognition

Why didn't you just do:

if output_recognition.requires_grad:
    output_recognition_transform_detached = self.resample_of_correlation_map_fast(
        cor_maps_for_recognition,
        resampling_grids_fm_coord_unit.detach(),
        self.class_pool_mask)
else:
    output_recognition_transform_detached = self.resample_of_correlation_map_fast(
        cor_maps_for_recognition,
        resampling_grids_fm_coord_unit,
        self.class_pool_mask)

Because if output_recognition.requires_grad is true, you run the resampling twice.

Hi, we need both computations because output_recognition and output_recognition_transform_detached are both used for training: one is used for the positives and the other for the negatives. This branch should not be hit at test time, since output_recognition.requires_grad should be False there.
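
To illustrate why the second call is not redundant, here is a minimal sketch (not the OS2D code): torch.nn.functional.grid_sample stands in for resample_of_correlation_map_fast, and the tensor names are made-up placeholders. Resampling with a detached grid gives the same values as resampling with the original grid, but it blocks gradients from flowing back into the branch that produced the grid, which is the whole point of keeping both outputs during training.

# Minimal sketch, not the OS2D implementation; names below are hypothetical stand-ins.
import torch
import torch.nn.functional as F

corr_map = torch.randn(1, 1, 8, 8, requires_grad=True)    # stands in for cor_maps_for_recognition
grid = (torch.rand(1, 4, 4, 2) * 2 - 1).requires_grad_()  # stands in for resampling_grids_fm_coord_unit

# Same resampling operation, two gradient behaviours
out = F.grid_sample(corr_map, grid, align_corners=False)
out_detached_grid = F.grid_sample(corr_map, grid.detach(), align_corners=False)

print(torch.allclose(out, out_detached_grid))  # True: the values are identical

out_detached_grid.sum().backward()
print(grid.grad)                               # None: no gradient reaches the grid/transform branch
print(corr_map.grad is not None)               # True: the feature maps still receive gradients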