zhanggang001 / RefineMask

RefineMask: Towards High-Quality Instance Segmentation with Fine-Grained Features (CVPR 2021)

The loss suddenly explodes

XiaoyuZHK opened this issue · comments

Hi @zhanggang001, I'm still a beginner and I have a question I don't know how to fix.
Without changing the RefineMask config (except DataRoot and num_classes), the loss suddenly explodes in epoch 1 or epoch 2:

2021-05-25 09:52:39,591 - mmdet - INFO - Epoch [1][550/1445] lr: 2.000e-02, eta: 3:27:36, time: 0.354, data_time: 0.002, memory: 5616, loss_rpn_cls: 0.1940, loss_rpn_bbox: 0.1124, loss_cls: 0.6903, acc: 89.3125, loss_bbox: 0.7711, loss_instance: 1.7145, loss_semantic: 1.1350, loss: 4.6172, grad_norm: 49.7330
2021-05-25 09:52:58,284 - mmdet - INFO - Epoch [1][600/1445] lr: 2.000e-02, eta: 3:27:43, time: 0.374, data_time: 0.003, memory: 5616, loss_rpn_cls: 0.1819, loss_rpn_bbox: 0.1232, loss_cls: 0.7582, acc: 88.1094, loss_bbox: 0.8542, loss_instance: 1.6824, loss_semantic: 0.7272, loss: 4.3272, grad_norm: 6.2210
2021-05-25 09:53:13,881 - mmdet - INFO - Epoch [1][650/1445] lr: 2.000e-02, eta: 3:25:04, time: 0.312, data_time: 0.003, memory: 5616, loss_rpn_cls: 10.1637, loss_rpn_bbox: 6.9990, loss_cls: 34.9405, acc: 91.4102, loss_bbox: 13.7452, loss_instance: 234.7180, loss_semantic: 7664.2850, loss: 7964.8515, grad_norm: 340497.7997
2021-05-25 09:53:27,481 - mmdet - INFO - Epoch [1][700/1445] lr: 2.000e-02, eta: 3:21:08, time: 0.272, data_time: 0.003, memory: 5616, loss_rpn_cls: 35289.9099, loss_rpn_bbox: 44193.1067, loss_cls: 875153.7309, acc: 94.6094, loss_bbox: 1115768.8280, loss_instance: 22061.8204, loss_semantic: 499100.5092, loss: 2591567.9840, grad_norm: 112257859.0582
2021-05-25 09:53:41,763 - mmdet - INFO - Epoch [1][750/1445] lr: 2.000e-02, eta: 3:18:13, time: 0.286, data_time: 0.003, memory: 5616, loss_rpn_cls: 20893164722860.7344, loss_rpn_bbox: 7885642268278.1553, loss_cls: 1349368483407166.5000, acc: 86.1094, loss_bbox: 627283726521955.8750, loss_instance: 4757506602120.5703, loss_semantic: 3351921805304.0806, loss: 2013540377899481.0000, grad_norm: 117931859697361120.0000
2021-05-25 09:53:56,211 - mmdet - INFO - Epoch [1][800/1445] lr: 2.000e-02, eta: 3:15:45, time: 0.289, data_time: 0.003, memory: 5616, loss_rpn_cls: 52192068187424022801154048.0000, loss_rpn_bbox: 61156605795124256318685184.0000, loss_cls: 12385031694969291828216463360.0000, acc: 27.7930, loss_bbox: 2810178410764419893789982720.0000, loss_instance: 320446566729177349750784.0000, loss_semantic: 7193256245805316776656896.0000, loss: 15316072270766823574903717888.0000, grad_norm: inf

Thanks for reading; I'm looking forward to your answer :)

Hello, I have the same problem. Have you solved it? We could discuss it; I look forward to your reply. Thank you!

I set the learning rate to 0.002 and the loss becomes normal, but the results are not the best. I am not sure how else to correct it.
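For anyone hitting the same divergence, here is a minimal sketch of where these settings live in an mmdetection-style config (RefineMask is built on mmdetection). The lr=0.002 value simply mirrors the workaround above, and gradient clipping is an additional, common way to keep grad_norm from blowing up; this is an illustration under those assumptions, not the authors' recommended setting.

```python
# Sketch of the optimizer section of an mmdetection-style config.
# lr=0.002 mirrors the workaround mentioned above (default is 0.02).
optimizer = dict(type='SGD', lr=0.002, momentum=0.9, weight_decay=0.0001)
# Clip gradients so a single bad batch cannot push the weights to inf/NaN.
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
```

Note also that mmdetection's default lr=0.02 assumes training on 8 GPUs with 2 images each; if you train with fewer GPUs or a smaller batch, the linear scaling rule suggests reducing the learning rate proportionally.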

Are you running RefineMask on your own dataset? If so, please check your data, especially images without any objects.
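As a rough way to act on this suggestion, the sketch below scans a COCO-format annotation file for images with no annotations and for degenerate (zero-area) boxes; the file path is a placeholder, so adapt it to your own data root and annotation file name.

```python
# Hypothetical sanity check for a COCO-format annotation file.
from pycocotools.coco import COCO

coco = COCO('data/my_dataset/annotations/instances_train.json')  # placeholder path

empty_images = []      # images with no annotations at all
degenerate_anns = []   # annotations with zero-width/height boxes or zero area

for img_id in coco.getImgIds():
    ann_ids = coco.getAnnIds(imgIds=[img_id], iscrowd=None)
    if not ann_ids:
        empty_images.append(img_id)
        continue
    for ann in coco.loadAnns(ann_ids):
        w, h = ann['bbox'][2], ann['bbox'][3]
        if w <= 0 or h <= 0 or ann.get('area', 0) <= 0:
            degenerate_anns.append(ann['id'])

print(f'{len(empty_images)} images without objects: {empty_images[:10]}')
print(f'{len(degenerate_anns)} degenerate annotations: {degenerate_anns[:10]}')
```

Images flagged this way could then be filtered out of the training set (or their annotations fixed) before retraining.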

Thank you, I set the learning rate to 0.002 and the problem is solved.