AntonMu / TrainYourOwnYOLO

Train a state-of-the-art YOLOv3 object detector from scratch!

Validation Loss not decreasing?

johnjhr opened this issue

Before filing a report, consider the following questions:

Have you followed all Readme instructions exactly?

Have you checked the troubleshooting section?

Have you looked for similar issues?

Once you are familiar with the code, you're welcome to modify it. Please only continue to file a bug report if you encounter an issue with the provided code and after having followed the instructions.

If you have followed the instructions exactly, couldn't solve your problem with the provided troubleshooting tips, and would still like to file a bug or make a feature request, please follow the steps below.

  1. It must be a bug, a feature request, or a significant problem with the documentation (for small docs fixes please send a PR instead).
  2. Every section of the form below must be filled out.

Readme

  • I have followed all Readme instructions carefully: Yes

Troubleshooting Section

Describe the problem

Training and inference work fine, but I can't find a way to decrease the validation loss. If I train for the default (51) iterations (about 20 minutes, with 50 images), the loss is around 15. When I train for 4000 iterations (200 images), the loss is also around 15. Is this a problem?

2 classes, 200 images each.

[Screenshot attached]

Hi @johnjaiharjose,

15 is a pretty decent value for two classes. Even in the provided example the loss only goes down to about 13. I would have a close look at your inference results and compare their quality. The loss is just a proxy but ultimately, you would want to check if your model is good for the task you had in mind. Likely, you'll have to define a new metric to measure success.
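For illustration, here is a minimal sketch of what such a task-specific metric could look like: the fraction of ground-truth boxes that are recovered by a prediction with sufficient overlap. The box format and function names are assumptions for this example, not the repo's actual output format.

```python
# Minimal sketch of a task-specific metric: fraction of ground-truth boxes
# matched by a prediction with IoU >= 0.5. The (x_min, y_min, x_max, y_max)
# box format and these helper names are assumptions for illustration.

def iou(box_a, box_b):
    """Intersection over union of two (x_min, y_min, x_max, y_max) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def recall_at_iou(ground_truth, predictions, threshold=0.5):
    """Fraction of ground-truth boxes covered by at least one prediction."""
    if not ground_truth:
        return 1.0
    hits = sum(
        1 for gt in ground_truth
        if any(iou(gt, pred) >= threshold for pred in predictions)
    )
    return hits / len(ground_truth)
```

You would feed it the boxes from your annotations and from the detector's output for each test image, then average over the test set.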

Hope this helps!

Yeah, it works very well for what I wanted it to do. I'm using it for a pick-and-place application with a robotic arm.
Also, one more question: how distinct should the training and testing images be?

Glad to hear!

There is no general rule for how similar they should be. Ideally, your test images should be as similar to the training images as possible, so you could, for instance, make a random split (see the sketch below). However, if you don't expect to do much hyperparameter tuning (model iterations), it may be beneficial to just use more data for training.
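As a concrete example, a random 90/10 split over an annotation file with one image per line could look like the sketch below. The file names and the split ratio are assumptions for illustration; adapt them to your own setup.

```python
# Minimal sketch of a random train/test split over an annotation file with one
# image per line. "data_train.txt" and the 90/10 ratio are assumptions here.
import random

with open("data_train.txt") as f:
    lines = [line.rstrip("\n") for line in f if line.strip()]

random.seed(42)          # make the split reproducible
random.shuffle(lines)

split = int(0.9 * len(lines))
train_lines, test_lines = lines[:split], lines[split:]

with open("data_train_split.txt", "w") as f:
    f.write("\n".join(train_lines) + "\n")
with open("data_test_split.txt", "w") as f:
    f.write("\n".join(test_lines) + "\n")
```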

Ultimately, it really depends. Of course, if you want to publish a paper on your results, you have to be more careful about all these things. But if you just want something that works, you can do whatever works.

Also, one comment about your earlier example: it is expected that your loss will be higher if you use more images. A model trained on 200 images with a validation loss of 15 is much better than a model trained on 50 images with the same loss. The default validation split in this model is 10%, so with 50 images the validation loss is computed on only 5 images, while with 200 images it is computed on 20. It is, of course, much easier to get a low loss on just 5 images than on 20.
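To spell out the arithmetic (the 10% figure is the default split mentioned above):

```python
# With a fixed 10% validation split, the validation loss is averaged over more
# images as the dataset grows, so the same loss value means more at 200 images.
val_split = 0.1  # default validation split mentioned above

for total_images in (50, 200):
    val_images = int(total_images * val_split)
    print(f"{total_images} labeled images -> validation loss computed on {val_images} images")

# Output:
# 50 labeled images -> validation loss computed on 5 images
# 200 labeled images -> validation loss computed on 20 images
```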