kevinlin311tw / Caffe-DeepBinaryCode

Supervised Semantics-preserving Deep Hashing (TPAMI18)

Home Page: https://arxiv.org/abs/1507.00101v2


Training doesn't converge when training on the ILSVRC12 dataset

willard-yuan opened this issue · comments

@kevinlin311tw Hi kevin,

Recently I have used Caffe-DeepBinaryCode to train a CNN model on the ILSVRC12 dataset. The train_val.prototxt and the solver.prototxt are taken from SSDH-VGG16-48.

I changed the latent layer in train_val.prototxt (L498) so that the feature is encoded as 512 bits. As the ILSVRC12 dataset contains 1000 classes, I also changed the fc8_classification layer to 1000 outputs (L574).

Everything seemed right, but training doesn't converge: after 1000 iterations the accuracy is still 0.001, i.e. chance level (1/1000).

Do you have any advice that could help me make the training converge on the ILSVRC12 dataset?

Thanks.

I would suggest adjusting the solver for training SSDH on ILSVRC12.

You may want to check our training logs below.
Log-ft1: https://www.dropbox.com/s/c4xy8byixjsp27n/log-ft1.txt?dl=0
Log-ft2: https://www.dropbox.com/s/uyvsu639kdvond9/log-ft2.txt?dl=0

Many thanks for providing the logs; they are very useful, and I'll try them today. I will close the issue if the training converges.

@kevinlin311tw So, in Log-ft1 you first fine-tuned only the latent_layer and fc8_classification layers, and then you used the model fine-tuned in that first stage to fine-tune all the layers of the VGGNet16 network, as shown in Log-ft2.

I followed the same parameter settings as in Log-ft1, and my solver.prototxt is as follows:

net: "train_val.prototxt"
test_iter: 1000
test_interval: 10000
base_lr: 0.001
display: 1000
max_iter: 50000
lr_policy: "step"
gamma: 0.1
momentum: 0.9
weight_decay: 0.0005
stepsize: 160000
snapshot: 10000
snapshot_prefix: "./models/snapshots_"
solver_mode: GPU
device_id: 0
random_seed: 42

The training process still doesn't converge. The loss in the log file is as follows:

I0516 08:51:04.900213 43914 solver.cpp:242] Iteration 0, loss = 7.15457
I0516 08:51:04.900370 43914 solver.cpp:258]     Train net output #0: loss: 50%-fire-rate = 0.0204458 (* 1 = 0.0204458 loss)
I0516 08:51:04.900445 43914 solver.cpp:258]     Train net output #1: loss: classfication-error = 7.16763 (* 1 = 7.16763 loss)
I0516 08:51:04.900511 43914 solver.cpp:258]     Train net output #2: loss: forcing-binary = -0.0335118 (* 1 = -0.0335118 loss)
I0516 08:51:04.900588 43914 solver.cpp:571] Iteration 0, lr = 0.001
I0516 08:57:40.980046 43914 solver.cpp:242] Iteration 1000, loss = 6.99466
I0516 08:57:40.980351 43914 solver.cpp:258]     Train net output #0: loss: 50%-fire-rate = 0.120729 (* 1 = 0.120729 loss)
I0516 08:57:40.980373 43914 solver.cpp:258]     Train net output #1: loss: classfication-error = 6.99564 (* 1 = 6.99564 loss)
I0516 08:57:40.980389 43914 solver.cpp:258]     Train net output #2: loss: forcing-binary = -0.121713 (* 1 = -0.121713 loss)
I0516 08:57:40.980406 43914 solver.cpp:571] Iteration 1000, lr = 0.001
I0516 09:04:17.163820 43914 solver.cpp:242] Iteration 2000, loss = 5.51559
I0516 09:04:17.164185 43914 solver.cpp:258]     Train net output #0: loss: 50%-fire-rate = 0.123866 (* 1 = 0.123866 loss)
I0516 09:04:17.164208 43914 solver.cpp:258]     Train net output #1: loss: classfication-error = 5.51579 (* 1 = 5.51579 loss)
I0516 09:04:17.164224 43914 solver.cpp:258]     Train net output #2: loss: forcing-binary = -0.124067 (* 1 = -0.124067 loss)
I0516 09:04:17.164239 43914 solver.cpp:571] Iteration 2000, lr = 0.001
I0516 09:10:54.446645 43914 solver.cpp:242] Iteration 3000, loss = 7.46336
I0516 09:10:54.446966 43914 solver.cpp:258]     Train net output #0: loss: 50%-fire-rate = 0.123695 (* 1 = 0.123695 loss)
I0516 09:10:54.446988 43914 solver.cpp:258]     Train net output #1: loss: classfication-error = 7.46433 (* 1 = 7.46433 loss)
I0516 09:10:54.447005 43914 solver.cpp:258]     Train net output #2: loss: forcing-binary = -0.124668 (* 1 = -0.124668 loss)
I0516 09:10:54.447023 43914 solver.cpp:571] Iteration 3000, lr = 0.001
I0516 09:17:32.938835 43914 solver.cpp:242] Iteration 4000, loss = 6.49413
I0516 09:17:32.939131 43914 solver.cpp:258]     Train net output #0: loss: 50%-fire-rate = 0.113883 (* 1 = 0.113883 loss)
I0516 09:17:32.939152 43914 solver.cpp:258]     Train net output #1: loss: classfication-error = 6.49431 (* 1 = 6.49431 loss)
I0516 09:17:32.939169 43914 solver.cpp:258]     Train net output #2: loss: forcing-binary = -0.114066 (* 1 = -0.114066 loss)
I0516 09:17:32.939189 43914 solver.cpp:571] Iteration 4000, lr = 0.001
I0516 09:24:11.822963 43914 solver.cpp:242] Iteration 5000, loss = 7.80371
I0516 09:24:11.823267 43914 solver.cpp:258]     Train net output #0: loss: 50%-fire-rate = 0.119942 (* 1 = 0.119942 loss)
I0516 09:24:11.823290 43914 solver.cpp:258]     Train net output #1: loss: classfication-error = 7.80376 (* 1 = 7.80376 loss)
I0516 09:24:11.823308 43914 solver.cpp:258]     Train net output #2: loss: forcing-binary = -0.120001 (* 1 = -0.120001 loss)
I0516 09:24:11.823329 43914 solver.cpp:571] Iteration 5000, lr = 0.001
I0516 09:30:50.927448 43914 solver.cpp:242] Iteration 6000, loss = 7.72446
I0516 09:30:50.929884 43914 solver.cpp:258]     Train net output #0: loss: 50%-fire-rate = 0.124264 (* 1 = 0.124264 loss)
I0516 09:30:50.929910 43914 solver.cpp:258]     Train net output #1: loss: classfication-error = 7.72447 (* 1 = 7.72447 loss)
I0516 09:30:50.929926 43914 solver.cpp:258]     Train net output #2: loss: forcing-binary = -0.12427 (* 1 = -0.12427 loss)
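
For comparing runs at a glance, the total loss can be pulled out of a Caffe log like the one above with a small script. This is just a sketch; the regex assumes the `Iteration N, loss = X` format shown in these logs:

```python
import re

# Extract (iteration, total loss) pairs from Caffe solver output.
LOSS_RE = re.compile(r"Iteration (\d+), loss = ([\d.eE+-]+)")

def parse_loss(log_text):
    """Return a list of (iteration, loss) tuples found in a Caffe log."""
    return [(int(it), float(loss)) for it, loss in LOSS_RE.findall(log_text)]

# A few lines taken from the log above.
sample = """\
I0516 08:51:04.900213 43914 solver.cpp:242] Iteration 0, loss = 7.15457
I0516 08:57:40.980046 43914 solver.cpp:242] Iteration 1000, loss = 6.99466
I0516 09:04:17.163820 43914 solver.cpp:242] Iteration 2000, loss = 5.51559
"""

print(parse_loss(sample))  # [(0, 7.15457), (1000, 6.99466), (2000, 5.51559)]
```

Plotting these pairs for a diverging and a converging run side by side makes it easy to see where they separate.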

I have tried tuning base_lr and stepsize with different values, but the problem remains the same. The settings in my train_val.prototxt are also kept the same as in Log-ft1. I have no idea what's going wrong.

Could you please help me figure out what's going wrong, so that I can finish the experiment?

Thanks.

I am quite busy with deadlines at the moment; I will get back to you next week.
In the meantime, you may want to check your training data, especially the labels.
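
One quick way to do that is to validate the image-list file Caffe reads (`path label` per line). The helper below is a hypothetical sketch: it checks that every label is an integer in `[0, num_classes - 1]` and flags malformed lines; the sample entries are made up, and in practice you would pass the lines of your actual train.txt:

```python
from collections import Counter

def check_labels(lines, num_classes=1000):
    """Return (class_counts, bad_line_numbers) for an image-list file."""
    counts, bad = Counter(), []
    for lineno, line in enumerate(lines, 1):
        parts = line.split()
        # A valid line is "path label" with an integer label.
        if len(parts) != 2 or not parts[1].lstrip("-").isdigit():
            bad.append(lineno)
            continue
        label = int(parts[1])
        if 0 <= label < num_classes:
            counts[label] += 1
        else:
            bad.append(lineno)  # out-of-range label, e.g. 1000 for ILSVRC12
    return counts, bad

# Synthetic example; labels must lie in [0, 999] for 1000 classes.
sample = [
    "n01440764/img1.JPEG 0",
    "n01443537/img2.JPEG 999",
    "broken_line",
    "n01484850/img3.JPEG 1000",
]
counts, bad = check_labels(sample)
print(dict(counts), bad)  # {0: 1, 999: 1} [3, 4]
```

Seeing 1000 distinct labels with roughly equal counts, and an empty bad-line list, is a reasonable sanity baseline before blaming the solver.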

Many thanks for your help. I'll check my training data carefully.

@kevinlin311tw Thank you for your help, and sorry to disturb you.

Is the validation set you used the standard ILSVRC12 validation set? Or did you merge the training set (1300 images per class) with the validation set (50 per class), shuffle the combined 1350 images per class, and then select 50 images of each class for validation, using the remainder for training?

Oops, that's very strange. I also used the standard validation set. I'm sure the labels of the training data are right, and my train_val.prototxt and solver.prototxt are kept the same as in log-ft1.txt.

@kevinlin311tw Kevin, I'm going to close the issue since the training process now converges. I merged the training set and validation set of each class, shuffled them, and selected 50 images per class for validation. The solver and network parameter settings were kept the same as in the logs you provided, and then training converged. However, if I use the standard validation set, it doesn't converge, so I'm curious about which validation set you used. The following are my fine-tuning logs:
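
The merged-and-reshuffled split described above can be sketched as follows. The file names and the `resplit` helper are placeholders of my own; the fixed seed keeps the split reproducible:

```python
import random

def resplit(per_class_images, val_per_class=50, seed=42):
    """Shuffle each class's images and hold out val_per_class for validation."""
    rng = random.Random(seed)
    train, val = [], []
    for label, images in per_class_images.items():
        images = list(images)
        rng.shuffle(images)
        val += [(img, label) for img in images[:val_per_class]]
        train += [(img, label) for img in images[val_per_class:]]
    return train, val

# Two synthetic classes, each with 1350 images (1300 train + 50 val merged).
data = {
    0: [f"class0_{i}.JPEG" for i in range(1350)],
    1: [f"class1_{i}.JPEG" for i in range(1350)],
}
train, val = resplit(data)
print(len(train), len(val))  # 2600 100
```

Note that a model validated on such a resplit is no longer directly comparable to results reported on the standard ILSVRC12 validation set.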

log-ft1.txt

I0519 22:12:55.194981 41434 solver.cpp:346] Iteration 0, Testing net (#0)
I0519 22:17:55.519238 41434 solver.cpp:414]     Test net output #0: accuracy = 0.00104
I0519 22:17:55.519349 41434 solver.cpp:414]     Test net output #1: loss: 50%-fire-rate = 0.0245879 (* 1 = 0.0245879 loss)
I0519 22:17:55.519358 41434 solver.cpp:414]     Test net output #2: loss: classfication-error = 6.92079 (* 1 = 6.92079 loss)
I0519 22:17:55.519364 41434 solver.cpp:414]     Test net output #3: loss: forcing-binary = -0.0286786 (* 1 = -0.0286786 loss)
I0519 22:17:56.321672 41434 solver.cpp:242] Iteration 0, loss = 6.94887
I0519 22:17:56.321723 41434 solver.cpp:258]     Train net output #0: loss: 50%-fire-rate = 0.02219 (* 1 = 0.02219 loss)
I0519 22:17:56.321733 41434 solver.cpp:258]     Train net output #1: loss: classfication-error = 6.95798 (* 1 = 6.95798 loss)
I0519 22:17:56.321738 41434 solver.cpp:258]     Train net output #2: loss: forcing-binary = -0.0312989 (* 1 = -0.0312989 loss)
I0519 22:17:56.321791 41434 solver.cpp:571] Iteration 0, lr = 0.001
I0519 22:27:43.324605 41434 solver.cpp:242] Iteration 1000, loss = 6.36861
I0519 22:27:43.325027 41434 solver.cpp:258]     Train net output #0: loss: 50%-fire-rate = 0.00159123 (* 1 = 0.00159123 loss)
I0519 22:27:43.325047 41434 solver.cpp:258]     Train net output #1: loss: classfication-error = 6.39074 (* 1 = 6.39074 loss)
I0519 22:27:43.325055 41434 solver.cpp:258]     Train net output #2: loss: forcing-binary = -0.0237277 (* 1 = -0.0237277 loss)
I0519 22:27:43.387272 41434 solver.cpp:571] Iteration 1000, lr = 0.001
I0519 22:37:31.285012 41434 solver.cpp:242] Iteration 2000, loss = 4.81982
I0519 22:37:31.285357 41434 solver.cpp:258]     Train net output #0: loss: 50%-fire-rate = 0.000464417 (* 1 = 0.000464417 loss)
I0519 22:37:31.285377 41434 solver.cpp:258]     Train net output #1: loss: classfication-error = 4.86978 (* 1 = 4.86978 loss)
I0519 22:37:31.285383 41434 solver.cpp:258]     Train net output #2: loss: forcing-binary = -0.0504298 (* 1 = -0.0504298 loss)
I0519 22:37:31.285429 41434 solver.cpp:571] Iteration 2000, lr = 0.001
I0519 22:47:19.229440 41434 solver.cpp:242] Iteration 3000, loss = 3.79659
I0519 22:47:19.230067 41434 solver.cpp:258]     Train net output #0: loss: 50%-fire-rate = 0.000382894 (* 1 = 0.000382894 loss)
I0519 22:47:19.230108 41434 solver.cpp:258]     Train net output #1: loss: classfication-error = 3.85793 (* 1 = 3.85793 loss)
I0519 22:47:19.230118 41434 solver.cpp:258]     Train net output #2: loss: forcing-binary = -0.0617227 (* 1 = -0.0617227 loss)
I0519 22:47:19.322366 41434 solver.cpp:571] Iteration 3000, lr = 0.001
I0519 22:57:07.446933 41434 solver.cpp:242] Iteration 4000, loss = 3.18048
I0519 22:57:07.447273 41434 solver.cpp:258]     Train net output #0: loss: 50%-fire-rate = 0.000468784 (* 1 = 0.000468784 loss)
I0519 22:57:07.447295 41434 solver.cpp:258]     Train net output #1: loss: classfication-error = 3.24265 (* 1 = 3.24265 loss)
I0519 22:57:07.447302 41434 solver.cpp:258]     Train net output #2: loss: forcing-binary = -0.0626391 (* 1 = -0.0626391 loss)
I0519 22:57:07.536197 41434 solver.cpp:571] Iteration 4000, lr = 0.001
I0519 23:06:55.284042 41434 solver.cpp:346] Iteration 5000, Testing net (#0)
I0519 23:11:56.569125 41434 solver.cpp:414]     Test net output #0: accuracy = 0.575019
I0519 23:11:56.569380 41434 solver.cpp:414]     Test net output #1: loss: 50%-fire-rate = 0.000477422 (* 1 = 0.000477422 loss)
I0519 23:11:56.569437 41434 solver.cpp:414]     Test net output #2: loss: classfication-error = 2.51677 (* 1 = 2.51677 loss)
I0519 23:11:56.569444 41434 solver.cpp:414]     Test net output #3: loss: forcing-binary = -0.0668283 (* 1 = -0.0668283 loss)
......

log-ft2.txt:

I0520 16:55:00.774651 46966 solver.cpp:414]     Test net output #0: accuracy = 0.715999
I0520 16:55:00.774756 46966 solver.cpp:414]     Test net output #1: loss: 50%-fire-rate = 0.000171341 (* 1 = 0.000171341 loss)
I0520 16:55:00.774765 46966 solver.cpp:414]     Test net output #2: loss: classfication-error = 1.24462 (* 1 = 1.24462 loss)
I0520 16:55:00.774770 46966 solver.cpp:414]     Test net output #3: loss: forcing-binary = -0.0872177 (* 1 = -0.0872177 loss)
I0520 16:55:02.043437 46966 solver.cpp:242] Iteration 0, loss = 1.27622
I0520 16:55:02.043531 46966 solver.cpp:258]     Train net output #0: loss: 50%-fire-rate = 9.54664e-05 (* 1 = 9.54664e-05 loss)
I0520 16:55:02.043594 46966 solver.cpp:258]     Train net output #1: loss: classfication-error = 1.3661 (* 1 = 1.3661 loss)
I0520 16:55:02.043618 46966 solver.cpp:258]     Train net output #2: loss: forcing-binary = -0.0899786 (* 1 = -0.0899786 loss)
I0520 16:55:03.184846 46966 solver.cpp:571] Iteration 0, lr = 0.0001
I0520 17:26:17.174290 46966 solver.cpp:242] Iteration 1000, loss = 1.42905
I0520 17:26:17.174721 46966 solver.cpp:258]     Train net output #0: loss: 50%-fire-rate = 0.000279333 (* 1 = 0.000279333 loss)
I0520 17:26:17.174746 46966 solver.cpp:258]     Train net output #1: loss: classfication-error = 1.52974 (* 1 = 1.52974 loss)
I0520 17:26:17.174752 46966 solver.cpp:258]     Train net output #2: loss: forcing-binary = -0.100963 (* 1 = -0.100963 loss)
I0520 17:26:17.975391 46966 solver.cpp:571] Iteration 1000, lr = 0.0001
I0520 17:57:16.574854 46966 solver.cpp:242] Iteration 2000, loss = 1.15161
I0520 17:57:16.575227 46966 solver.cpp:258]     Train net output #0: loss: 50%-fire-rate = 0.00026061 (* 1 = 0.00026061 loss)
I0520 17:57:16.575251 46966 solver.cpp:258]     Train net output #1: loss: classfication-error = 1.2542 (* 1 = 1.2542 loss)
I0520 17:57:16.575259 46966 solver.cpp:258]     Train net output #2: loss: forcing-binary = -0.102855 (* 1 = -0.102855 loss)
I0520 17:57:17.375612 46966 solver.cpp:571] Iteration 2000, lr = 0.0001
I0520 18:28:15.260107 46966 solver.cpp:242] Iteration 3000, loss = 1.19722
I0520 18:28:15.260536 46966 solver.cpp:258]     Train net output #0: loss: 50%-fire-rate = 0.000243434 (* 1 = 0.000243434 loss)
I0520 18:28:15.260560 46966 solver.cpp:258]     Train net output #1: loss: classfication-error = 1.30094 (* 1 = 1.30094 loss)
I0520 18:28:15.260568 46966 solver.cpp:258]     Train net output #2: loss: forcing-binary = -0.103961 (* 1 = -0.103961 loss)
I0520 18:28:16.060641 46966 solver.cpp:571] Iteration 3000, lr = 0.0001
I0520 18:59:14.391197 46966 solver.cpp:242] Iteration 4000, loss = 1.35231
I0520 18:59:14.391628 46966 solver.cpp:258]     Train net output #0: loss: 50%-fire-rate = 0.000150197 (* 1 = 0.000150197 loss)
I0520 18:59:14.391652 46966 solver.cpp:258]     Train net output #1: loss: classfication-error = 1.45515 (* 1 = 1.45515 loss)
I0520 18:59:14.391674 46966 solver.cpp:258]     Train net output #2: loss: forcing-binary = -0.102987 (* 1 = -0.102987 loss)
I0520 18:59:15.194217 46966 solver.cpp:571] Iteration 4000, lr = 0.0001
I0520 19:30:12.119384 46966 solver.cpp:346] Iteration 5000, Testing net (#0)
I0520 19:35:13.344359 46966 solver.cpp:414]     Test net output #0: accuracy = 0.73376
I0520 19:35:13.344537 46966 solver.cpp:414]     Test net output #1: loss: 50%-fire-rate = 0.000127667 (* 1 = 0.000127667 loss)
I0520 19:35:13.344547 46966 solver.cpp:414]     Test net output #2: loss: classfication-error = 1.08828 (* 1 = 1.08828 loss)
I0520 19:35:13.344554 46966 solver.cpp:414]     Test net output #3: loss: forcing-binary = -0.103791 (* 1 = -0.103791 loss)

You should check your data carefully. Your current setting is weird.

@kevinlin311tw I have carefully checked out my data many times, and tried many times