chenyilun95 / tf-cpn

Cascaded Pyramid Network for Multi-Person Pose Estimation (CVPR 2018)


Training loss fluctuates

boyuansun opened this issue · comments

Hello, I am training resnet101 at 384x288 with batch size 16 and lr 1.5625e-05, and the loss fluctuates between 60 and 100. Is that normal?

I am also training resnet101 at 384x288; my batch_size is 24.
06-25 09:26:36 Epoch 351 itr 21061536/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 21.4816 refine_loss: 34.5009 loss: 55.9825
06-25 09:26:37 Epoch 351 itr 21061632/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 18.6679 refine_loss: 28.4806 loss: 47.1486
06-25 09:26:38 Epoch 351 itr 21061728/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 18.8858 refine_loss: 28.5278 loss: 47.4136
06-25 09:26:39 Epoch 351 itr 21061824/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 24.7110 refine_loss: 38.7275 loss: 63.4385
06-25 09:26:40 Epoch 351 itr 21061920/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 19.4357 refine_loss: 33.6059 loss: 53.0416
06-25 09:26:42 Epoch 351 itr 21062016/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 21.2636 refine_loss: 36.7295 loss: 57.9931
06-25 09:26:43 Epoch 351 itr 21062112/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 18.0329 refine_loss: 32.2765 loss: 50.3094
06-25 09:26:44 Epoch 351 itr 21062208/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 17.2394 refine_loss: 27.5176 loss: 44.7570
06-25 09:26:45 Epoch 351 itr 21062304/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 16.4739 refine_loss: 27.0160 loss: 43.4899
06-25 09:26:46 Epoch 351 itr 21062400/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 18.3966 refine_loss: 36.0244 loss: 54.4210
06-25 09:26:48 Epoch 351 itr 21062496/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 19.4529 refine_loss: 32.0978 loss: 51.5507
06-25 09:26:49 Epoch 351 itr 21062592/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 21.5119 refine_loss: 34.4650 loss: 55.9769
06-25 09:26:50 Epoch 351 itr 21062688/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 18.6829 refine_loss: 28.4995 loss: 47.1824
06-25 09:26:51 Epoch 351 itr 21062784/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 18.9842 refine_loss: 34.5742 loss: 53.5584
06-25 09:26:52 Epoch 351 itr 21062880/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 19.8309 refine_loss: 33.7855 loss: 53.6165
06-25 09:26:54 Epoch 351 itr 21062976/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 19.5187 refine_loss: 31.1628 loss: 50.6815
06-25 09:26:55 Epoch 351 itr 21063072/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 21.1973 refine_loss: 38.4505 loss: 59.6478
06-25 09:26:56 Epoch 351 itr 21063168/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 15.4164 refine_loss: 25.8850 loss: 41.3013
06-25 09:26:57 Epoch 351 itr 21063264/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 17.2441 refine_loss: 29.4565 loss: 46.7006
06-25 09:26:59 Epoch 351 itr 21063360/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 21.7971 refine_loss: 37.6570 loss: 59.4541
06-25 09:27:00 Epoch 351 itr 21063456/60000: lr: 1.5625e-05 speed: 1.21(0.94s r0.25)s/itr 0.21h/epoch global_loss: 19.3076 refine_loss: 32.4295 loss: 51.7371
This is my loss; I hope it helps.
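Per-iteration losses like the ones above always jitter, because each value reflects a single mini-batch. One way to judge whether training is actually converging is to smooth the logged total loss with a moving average. Below is a minimal sketch that parses lines in the log format shown above (the format is assumed; adjust the regex if your log differs):

```python
import re
from collections import deque

# Matches the trailing "loss: <number>" at the end of each log line
# (an assumption based on the log format above).
LINE_RE = re.compile(r"loss: ([0-9.]+)\s*$")

def smoothed_losses(lines, window=10):
    """Yield a running mean of the total loss over the last `window` entries."""
    buf = deque(maxlen=window)
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            buf.append(float(m.group(1)))
            yield sum(buf) / len(buf)

# Three entries from the log above, abbreviated for the example.
log = [
    "06-25 09:26:36 Epoch 351 itr 21061536/60000: ... loss: 55.9825",
    "06-25 09:26:37 Epoch 351 itr 21061632/60000: ... loss: 47.1486",
    "06-25 09:26:38 Epoch 351 itr 21061728/60000: ... loss: 47.4136",
]
avgs = list(smoothed_losses(log, window=3))
for avg in avgs:
    print(f"{avg:.4f}")
```

If the smoothed curve is flat (as it appears to be by epoch 351), the fluctuation is just mini-batch noise rather than instability.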

@juciny Thanks

@juciny what GPU did you use for this? Also, do you think I need to train it to epoch 350?

I tried retraining and stopped early (around epoch 100), and the results looked far worse. Did you see a significant performance improvement as training continued?