ClementPinard / SfmLearner-Pytorch

Pytorch version of SfmLearner from Tinghui Zhou et al.

Questions about Smooth Loss Weight and batch size

alexlopezcifuentes opened this issue · comments

Hi Clement!

Thanks a lot for the work of implementing this paper in PyTorch. I have two questions:

  • The first one is about the weight applied to the Smooth Loss. According to the original paper, this weight (lambda_s) is related to the scale of the depth as lambda_s = 0.5/s. However, in the line of code below, this scale relation seems to be missing (see also the sketch after these two bullets).
    Is this an error? Did you try the original weighting policy and then discard it?

    loss = w1*loss_1 + w2*loss_2 + w3*loss_3

    Did you try relating that weight to the scale?

  • The second question is about the batch size. In your case, how much GPU memory does a batch size of 4 use? I'm trying to train the networks on another dataset and, with that batch size (plus the networks), the GPU is only using 2-3 GB, which seems really low. Am I doing something wrong, or is this the regular behaviour? If it is, have you tried increasing the batch size?
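
For clarity, here is roughly what I mean by the paper's weighting policy. This is just a sketch of my understanding; the function and variable names below are my own, not code from this repo:

    # Sketch of the per-scale weighting described in the paper
    # (my own naming, not code from this repository).
    def weighted_smooth_loss(smoothness_terms, lambda_s=0.5):
        # smoothness_terms: per-scale smoothness values, finest scale first,
        # corresponding to downscaling factors 1, 2, 4, 8
        return sum(lambda_s / d * term
                   for d, term in zip((1, 2, 4, 8), smoothness_terms))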

Thanks in advance!

Hi, thanks for your interest in my repo!

  1. The smooth loss is computed with the scale taken into account. The term we add here is the sum of all the scale-aware smooth losses computed in this function: https://github.com/ClementPinard/SfmLearner-Pytorch/blob/master/loss_functions.py#L70 (notice the weight /= 2.3 each time we go down a scale; see the sketch below).
  2. Empirical tests showed that this batch size produced the best results. For my tests (2018), it was enough for the CPU not to be the bottleneck and for the GPU to run at 100% compute capacity (not memory). For more recent work, there have been some experiments with multi-GPU training; you can have a look at e.g. Competitive Collaboration: https://github.com/anuragranj/cc
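
To make the scale handling concrete, here is a simplified sketch of the idea behind that function. It is paraphrased rather than a verbatim copy of loss_functions.py, so treat the names and exact details as illustrative:

    import torch

    def scale_aware_smooth_loss(pred_depths):
        # pred_depths: list of [B, 1, H, W] depth maps, finest scale first.
        # Each coarser scale's second-order gradient penalty gets a smaller
        # weight, which plays the role of the paper's scale-dependent lambda_s.
        def gradient(t):
            dy = t[:, :, 1:, :] - t[:, :, :-1, :]
            dx = t[:, :, :, 1:] - t[:, :, :, :-1]
            return dx, dy

        loss = 0
        weight = 1.0
        for depth in pred_depths:
            dx, dy = gradient(depth)
            dx2, dxdy = gradient(dx)
            dydx, dy2 = gradient(dy)
            loss += weight * (dx2.abs().mean() + dxdy.abs().mean()
                              + dydx.abs().mean() + dy2.abs().mean())
            weight /= 2.3  # shrink the weight each time we go down a scale
        return loss

With four scales, this amounts to weights of roughly 1, 0.43, 0.19 and 0.08 on the successive smoothness terms.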