lmb-freiburg / flownet2

FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks

Home Page: https://lmb.informatik.uni-freiburg.de/Publications/2017/IMKDB17/

Test loss stuck at 10.08 while training

adhara123007 opened this issue · comments

I tried training FlowNet2-CSS (only net3) on my own dataset. The test loss is stuck at 10.08. Do you have any idea why?

I0316 21:46:32.378043 8509 solver.cpp:229] Iteration 400, loss = 10.08
I0316 21:46:32.378368 8509 solver.cpp:245] Train net output #0: net3_flow_loss2 = 896 (* 0.005 = 4.48 loss)
I0316 21:46:32.378394 8509 solver.cpp:245] Train net output #1: net3_flow_loss3 = 224 (* 0.01 = 2.24 loss)
I0316 21:46:32.378402 8509 solver.cpp:245] Train net output #2: net3_flow_loss4 = 56 (* 0.02 = 1.12 loss)
I0316 21:46:32.378419 8509 solver.cpp:245] Train net output #3: net3_flow_loss5 = 14 (* 0.08 = 1.12 loss)
I0316 21:46:32.378427 8509 solver.cpp:245] Train net output #4: net3_flow_loss6 = 3.5 (* 0.32 = 1.12 loss)
I0316 21:46:32.378453 8509 sgd_solver.cpp:106] Iteration 400, lr = 1e-05
I0316 21:47:28.432951 8509 solver.cpp:229] Iteration 450, loss = 10.08
I0316 21:47:28.433193 8509 solver.cpp:245] Train net output #0: net3_flow_loss2 = 896 (* 0.005 = 4.48 loss)
I0316 21:47:28.433209 8509 solver.cpp:245] Train net output #1: net3_flow_loss3 = 224 (* 0.01 = 2.24 loss)
I0316 21:47:28.433218 8509 solver.cpp:245] Train net output #2: net3_flow_loss4 = 56 (* 0.02 = 1.12 loss)
I0316 21:47:28.433226 8509 solver.cpp:245] Train net output #3: net3_flow_loss5 = 14 (* 0.08 = 1.12 loss)
I0316 21:47:28.433245 8509 solver.cpp:245] Train net output #4: net3_flow_loss6 = 3.5 (* 0.32 = 1.12 loss)
I0316 21:47:28.433257 8509 sgd_solver.cpp:106] Iteration 450, lr = 1e-05
I0316 21:48:23.904021 8509 solver.cpp:456] Snapshotting to binary proto file flow_iter_500.caffemodel
I0316 21:48:29.792264 8509 sgd_solver.cpp:273] Snapshotting solver state to binary proto file flow_iter_500.solverstate
I0316 21:48:35.446125 8509 solver.cpp:338] Iteration 500, Testing net (#0)
I0316 21:48:35.446193 8509 net.cpp:695] Ignoring source layer CustomData1
I0316 21:48:35.446202 8509 net.cpp:695] Ignoring source layer blob0_CustomData1_0_split
I0316 21:48:35.446207 8509 net.cpp:695] Ignoring source layer blob1_CustomData1_1_split
I0316 21:48:35.446210 8509 net.cpp:695] Ignoring source layer blob2_CustomData1_2_split
I0316 21:48:41.494271 8509 solver.cpp:406] Test net output #0: net3_flow_loss2 = 896 (* 0.005 = 4.48 loss)
I0316 21:48:41.494333 8509 solver.cpp:406] Test net output #1: net3_flow_loss3 = 224 (* 0.01 = 2.24 loss)
I0316 21:48:41.494343 8509 solver.cpp:406] Test net output #2: net3_flow_loss4 = 56 (* 0.02 = 1.12 loss)
I0316 21:48:41.494352 8509 solver.cpp:406] Test net output #3: net3_flow_loss5 = 14 (* 0.08 = 1.12 loss)
I0316 21:48:41.494359 8509 solver.cpp:406] Test net output #4: net3_flow_loss6 = 3.5 (* 0.32 = 1.12 loss)
I0316 21:48:42.251804 8509 solver.cpp:229] Iteration 500, loss = 10.08
I0316 21:48:42.251884 8509 solver.cpp:245] Train net output #0: net3_flow_loss2 = 896 (* 0.005 = 4.48 loss)
I0316 21:48:42.251895 8509 solver.cpp:245] Train net output #1: net3_flow_loss3 = 224 (* 0.01 = 2.24 loss)
I0316 21:48:42.251919 8509 solver.cpp:245] Train net output #2: net3_flow_loss4 = 56 (* 0.02 = 1.12 loss)
I0316 21:48:42.251935 8509 solver.cpp:245] Train net output #3: net3_flow_loss5 = 14 (* 0.08 = 1.12 loss)
I0316 21:48:42.251946 8509 solver.cpp:245] Train net output #4: net3_flow_loss6 = 3.5 (* 0.32 = 1.12 loss)
I0316 21:48:42.251957 8509 sgd_solver.cpp:106] Iteration 500, lr = 1e-05
I0316 21:49:38.605291 8509 solver.cpp:229] Iteration 550, loss = 10.08
I0316 21:49:38.605636 8509 solver.cpp:245] Train net output #0: net3_flow_loss2 = 896 (* 0.005 = 4.48 loss)
I0316 21:49:38.605666 8509 solver.cpp:245] Train net output #1: net3_flow_loss3 = 224 (* 0.01 = 2.24 loss)
I0316 21:49:38.605680 8509 solver.cpp:245] Train net output #2: net3_flow_loss4 = 56 (* 0.02 = 1.12 loss)
I0316 21:49:38.605692 8509 solver.cpp:245] Train net output #3: net3_flow_loss5 = 14 (* 0.08 = 1.12 loss)
I0316 21:49:38.605749 8509 solver.cpp:245] Train net output #4: net3_flow_loss6 = 3.5 (* 0.32 = 1.12 loss)
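For reference, the total loss the solver reports is just the weighted sum of the per-scale losses printed in the log (weight × raw loss, as shown in each line's parentheses). A plain arithmetic check, not repository code:

```python
# Per-scale endpoint-error losses and their weights, copied from the log above.
raw_losses = {
    "net3_flow_loss2": (896.0, 0.005),
    "net3_flow_loss3": (224.0, 0.01),
    "net3_flow_loss4": (56.0, 0.02),
    "net3_flow_loss5": (14.0, 0.08),
    "net3_flow_loss6": (3.5, 0.32),
}

# Total loss = sum of (value * weight) over all scales.
total = sum(value * weight for value, weight in raw_losses.values())
print(round(total, 2))  # 10.08 -- matches "Iteration 400, loss = 10.08"
```

Since every per-scale loss is bit-identical across iterations, the total is pinned at exactly 10.08, which is the symptom being discussed.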

All your intermediate losses are always the same, too... any chance that your dataset is a bit wonky? As in, just one sample? 🙂

I don't think so. I actually modified convert_imageset_and_flow.cpp to create an LMDB database from grayscale images. Then I changed train.prototxt to remove the color-related data augmentations and ran the training. I also made the other necessary modifications, such as correcting the slice points in the custom data layer.
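For what it's worth, switching from RGB to grayscale does change where the Slice layer after the data layer must cut, since each LMDB sample packs the two images and the flow along the channel axis. A minimal sketch of the offset arithmetic (the exact per-sample channel layout is an assumption based on the image-pair-plus-two-channel-flow packing, so verify against your own prototxt):

```python
def slice_points(channels):
    """Cumulative channel offsets where a Caffe Slice layer must cut
    a concatenated blob into len(channels) pieces."""
    pts, total = [], 0
    for c in channels[:-1]:  # no cut needed after the last piece
        total += c
        pts.append(total)
    return pts

# img0, img1, flow(u, v) packed along the channel axis:
rgb_layout = [3, 3, 2]   # RGB pair  -> slice_point: 3, 6
gray_layout = [1, 1, 2]  # gray pair -> slice_point: 1, 2

print(slice_points(rgb_layout))   # [3, 6]
print(slice_points(gray_layout))  # [1, 2]
```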

Hm. Did you check if the data looks feasible after augmentation? Since all your losses are constant, maybe your data does not make it all the way through the network.
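One cheap way to act on this advice is to dump a few post-augmentation blobs (for example via pycaffe's `net.blobs[...].data`) and sanity-check them before they ever reach a loss layer. A sketch of the check itself, run here on synthetic stand-in arrays (blob names and thresholds are illustrative):

```python
import numpy as np

def looks_feasible(blob, name):
    """Flag blobs that are non-finite or (near-)constant -- typical
    symptoms of data not making it all the way through the network."""
    ok = True
    if not np.isfinite(blob).all():
        print(f"{name}: contains NaN/Inf values")
        ok = False
    if blob.std() < 1e-6:
        print(f"{name}: (near-)constant, std = {blob.std():.2e}")
        ok = False
    return ok

# Stand-ins for dumped augmented image batches, shape (N, C, H, W):
good = np.random.rand(4, 1, 64, 64).astype(np.float32)
bad = np.zeros((4, 1, 64, 64), dtype=np.float32)

print(looks_feasible(good, "img0_aug"))  # True
print(looks_feasible(bad, "img0_aug"))   # False (all-zero batch)
```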

Thanks!! That helped.
Can you help me with one more thing? I need some clarification on the parameters you define in the data augmentation layers.

There are mean and spread parameters. What are the exp and prob parameters?
Is prob the p value of a Bernoulli distribution?

exp/mean/spread are explained in src/caffe/proto/caffe.proto (line 600 et seqq). prob specifies the Bernoulli probability that this augmentation is applied at all.
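A rough Python rendering of how those parameters combine, based on the comments in the fork's caffe.proto (this is an approximation for intuition, not a verbatim port of the C++ augmentation code, and the exact behavior may differ in details):

```python
import math
import random

def sample_coeff(mean, spread, exp=False, rand_type="uniform"):
    """Draw one augmentation coefficient: uniform in
    [mean - spread, mean + spread] (or Gaussian with std = spread).
    With exp=True the draw happens in log-space and is exponentiated,
    which yields a multiplicative factor centered at e**mean."""
    if rand_type == "uniform":
        v = random.uniform(mean - spread, mean + spread)
    else:  # "gaussian"
        v = random.gauss(mean, spread)
    return math.exp(v) if exp else v

def maybe_augment(prob):
    """prob is the Bernoulli probability that this augmentation fires."""
    return random.random() < prob

# Example: a multiplicative contrast-like factor centered at 1.0
# (mean=0, exp=True), applied on every sample (prob=1.0):
if maybe_augment(prob=1.0):
    factor = sample_coeff(mean=0.0, spread=0.4, exp=True)
    assert math.exp(-0.4) <= factor <= math.exp(0.4)
```

The exp=True form is the reason mean=0 shows up for multiplicative augmentations: exponentiating a zero-centered draw gives a factor whose neutral value is 1.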

Thanks again!!