kuangliu / pytorch-retinanet

RetinaNet in PyTorch

loc_loss: 0.000 | cls_loss: 1.000 | train_loss: 1.697 | avg_loss: 1.697

Imagery007 opened this issue · comments

$ python train.py --lr 0.001
==> Preparing data..

Epoch: 0
/home/zhangchaohua/envs/pytorch/lib/python2.7/site-packages/torch/nn/functional.py:2351: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
/home/zhangchaohua/envs/pytorch/lib/python2.7/site-packages/torch/nn/functional.py:2423: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
/home/zhangchaohua/envs/pytorch/lib/python2.7/site-packages/torch/nn/_reduction.py:49: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
loc_loss: 0.000 | cls_loss: 1.000 | train_loss: 1.697 | avg_loss: 1.697
loc_loss: 0.000 | cls_loss: 1.000 | train_loss: 1.391 | avg_loss: 1.544
loc_loss: 0.000 | cls_loss: 1.000 | train_loss: 1.472 | avg_loss: 1.520
loc_loss: 0.000 | cls_loss: 1.000 | train_loss: 1.385 | avg_loss: 1.486
loc_loss: 0.000 | cls_loss: 1.000 | train_loss: 1.475 | avg_loss: 1.484
loc_loss: 0.000 | cls_loss: 1.000 | train_loss: 1.300 | avg_loss: 1.453
loc_loss: 0.000 | cls_loss: 1.000 | train_loss: 1.414 | avg_loss: 1.448
loc_loss: 0.000 | cls_loss: 1.000 | train_loss: 1.418 | avg_loss: 1.444
loc_loss: 0.000 | cls_loss: 1.000 | train_loss: 1.468 | avg_loss: 1.447
loc_loss: 0.000 | cls_loss: 1.000 | train_loss: 1.393 | avg_loss: 1.441
loc_loss: 0.000 | cls_loss: 1.000 | train_loss: 1.466 | avg_loss: 1.444
loc_loss: 0.000 | cls_loss: 1.000 | train_loss: 1.588 | avg_loss: 1.456
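As an aside, the avg_loss column in the log is consistent with a simple running mean of train_loss (e.g. (1.697 + 1.391) / 2 = 1.544 on the second line). A minimal plain-Python sketch of that bookkeeping (not the repo's actual train.py, just an illustration):

```python
def running_average(losses):
    """Yield the running mean after each new loss value."""
    avg = 0.0
    for i, loss in enumerate(losses):
        # Incremental mean: fold the new value into the average so far.
        avg = (avg * i + loss) / (i + 1)
        yield avg

# The first few train_loss values from the log above reproduce
# the printed avg_loss column (1.697, 1.544, 1.520, ...).
averages = list(running_average([1.697, 1.391, 1.472, 1.385]))
```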

I just started training. Why does my loss look like this? I am using PyTorch 1.0.1.
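For context, RetinaNet's classification loss is the focal loss from the paper, FL(p_t) = -α_t (1 - p_t)^γ log(p_t), which down-weights easy examples so the many well-classified negative anchors do not dominate. A plain-Python sketch of the per-anchor term, using the paper's defaults (γ = 2, α = 0.25); this is an illustration of the formula, not the repo's loss.py:

```python
import math

def focal_loss_term(p, target, gamma=2.0, alpha=0.25):
    """Per-anchor binary focal loss: -alpha_t * (1 - p_t)**gamma * log(p_t).

    p: predicted probability of the positive class (after sigmoid).
    target: 1 for a positive anchor, 0 for a negative one.
    """
    # p_t is the probability assigned to the true class.
    p_t = p if target == 1 else 1.0 - p
    alpha_t = alpha if target == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A confidently correct negative contributes almost nothing,
# while a confidently wrong one still carries a large penalty.
easy_negative = focal_loss_term(0.01, 0)
hard_negative = focal_loss_term(0.90, 0)
```

If cls_loss is stuck at a constant value like 1.000 or 0.000, the issue is usually in how the loss is computed or printed rather than in this formula itself, which is why swapping in a corrected loss.py (as in the linked comment) fixes the log.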

With batch_size=8, after training for an hour:
loc_loss: 0.000 | cls_loss: 0.000 | train_loss: 0.485 | avg_loss: 0.577
loc_loss: 0.000 | cls_loss: 0.000 | train_loss: 0.497 | avg_loss: 0.577
loc_loss: 0.000 | cls_loss: 0.000 | train_loss: 0.471 | avg_loss: 0.577
loc_loss: 0.000 | cls_loss: 0.000 | train_loss: 0.451 | avg_loss: 0.576
loc_loss: 0.000 | cls_loss: 0.000 | train_loss: 0.547 | avg_loss: 0.576
loc_loss: 0.000 | cls_loss: 0.000 | train_loss: 0.447 | avg_loss: 0.576
loc_loss: 0.000 | cls_loss: 0.000 | train_loss: 0.560 | avg_loss: 0.576
loc_loss: 0.000 | cls_loss: 0.000 | train_loss: 0.496 | avg_loss: 0.576
loc_loss: 0.000 | cls_loss: 0.000 | train_loss: 0.498 | avg_loss: 0.576
loc_loss: 0.000 | cls_loss: 0.000 | train_loss: 0.366 | avg_loss: 0.576
loc_loss: 0.000 | cls_loss: 0.000 | train_loss: 0.594 | avg_loss: 0.576
loc_loss: 0.000 | cls_loss: 0.000 | train_loss: 0.652 | avg_loss: 0.576
loc_loss: 0.000 | cls_loss: 0.000 | train_loss: 0.628 | avg_loss: 0.576
loc_loss: 0.000 | cls_loss: 0.000 | train_loss: 0.529 | avg_loss: 0.576
loc_loss: 0.000 | cls_loss: 0.000 | train_loss: 0.428 | avg_loss: 0.576
loc_loss: 0.000 | cls_loss: 0.000 | train_loss: 0.447 | avg_loss: 0.576
What happened?

I solved the problem using the loss.py from the linked comment. It looks normal now.
#52 (comment)