ultralytics / JSON2YOLO

Convert JSON annotations into YOLO format.

Home Page: https://docs.ultralytics.com

Explanation of results

the-it-weirdo opened this issue · comments

Hello, I trained a YOLOv8 model on 53 classes (all belonging to indoor environments) selected from the MS-COCO dataset. I trained the model for 100 epochs with default settings, and these are the results I received.

[Screenshot: training results plots]

Can someone please explain the results to me?

This is what I understand:

  • The mAP50 metric is at 0.469. This means the model is correctly identifying and localizing objects about 46.9% of the time when a 50% overlap with the ground-truth bounding boxes is counted as a correct detection.
  • The mAP50-95, which averages the model's performance across IoU thresholds from 0.50 to 0.95, is at 0.331. This means the model's performance drops when a stricter localization criterion is applied. This is common, because achieving a high degree of overlap for correct detections is harder, but it shows the model has room for improvement in the precision of its bounding box predictions.
  • In object detection, especially with a large number of classes (53 in this case), achieving high mAP values can be challenging. The mAP at IoU=0.5 is decent, suggesting that the model can detect objects with a fair amount of accuracy when a lower threshold for overlap is set.
  • The box loss is the bounding box regression loss, which measures the error in the predicted bounding boxes compared to the ground truth. Lower box loss means the predicted bounding boxes are more accurate. The box loss is 1.11 for training and 1.125 for validation.
  • The classification loss (cls_loss) measures the error in the predicted class probabilities for each object in the image compared to the ground truth. Lower classification loss means the model is more accurately predicting the class of an object. The classification loss is 1.175 for training and 1.227 for validation.
  • The distribution focal loss (dfl_loss) measures the error in the predicted box-boundary distributions: YOLOv8 regresses each box edge as a discrete probability distribution, and DFL penalizes distributions that place probability mass far from the true boundary. A lower dfl_loss indicates more precise box localization. The dfl loss is 1.179 for training and 1.166 for validation.
  • All three losses are decreasing over epochs, which is a good sign indicating that the model is learning.
  • There's a significant drop early in training (before epoch 5), followed by a plateau, which is common as the model starts to converge.
  • The patterns of the validation losses are similar to those of the training losses, but the validation losses are generally somewhat higher.
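To make the mAP50-95 bullet concrete: it is simply the mean of the AP values computed at the ten IoU thresholds 0.50, 0.55, …, 0.95. A minimal sketch (the per-threshold AP numbers below are made up for illustration, not taken from this run):

```python
def map50_95(ap_per_threshold):
    """Average AP over the 10 IoU thresholds 0.50:0.05:0.95."""
    assert len(ap_per_threshold) == 10
    return sum(ap_per_threshold) / len(ap_per_threshold)

# AP typically decays as the IoU threshold tightens, which is why
# mAP50-95 sits well below mAP50 even for a decent model:
aps = [0.469, 0.45, 0.43, 0.40, 0.37, 0.33, 0.28, 0.22, 0.15, 0.07]
print(round(map50_95(aps), 3))  # → 0.317
```

This is why a large gap between mAP50 and mAP50-95 points specifically at box-localization precision rather than classification.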

Did I miss anything in my understanding of the results? Can I improve the results? If so, how?

Hello,

Thank you for your reply and suggestions. Apologies for my late response. However, is there a way to add axis titles to the graphs generated during training, or a way to download the graphs?

Hello @pderrenger

Thank you for the guide on Tensorboard. I appreciate it.

I have already performed a training run using Ultralytics cloud training, and the graph I posted earlier was a screenshot of the training results in the dashboard. I was wondering if there's a better way to download the graphs than taking screenshots, and whether I could add axis titles to them. Thank you 😊
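One generic workaround, independent of the dashboard: local Ultralytics runs write a `results.csv` with per-epoch metrics, so if you can obtain that file from your run's artifacts you can re-plot any curve yourself with full control over axis titles. A minimal sketch, assuming Ultralytics-style column names such as `epoch` and `train/box_loss` (adjust to whatever headers your file actually contains):

```python
import csv

import matplotlib
matplotlib.use("Agg")  # headless backend: render straight to a file
import matplotlib.pyplot as plt

def plot_column(csv_path, column, out_path):
    """Plot one metric column from a results.csv against epoch."""
    epochs, values = [], []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Headers in results.csv sometimes carry stray spaces.
            row = {k.strip(): v for k, v in row.items()}
            epochs.append(int(float(row["epoch"])))
            values.append(float(row[column]))
    plt.figure()
    plt.plot(epochs, values)
    plt.xlabel("Epoch")      # the axis titles the dashboard graphs lack
    plt.ylabel(column)
    plt.title(f"{column} vs. epoch")
    plt.savefig(out_path, dpi=200)
    plt.close()

# Example: plot_column("results.csv", "train/box_loss", "box_loss.png")
```

The same function works for `val/box_loss`, `metrics/mAP50(B)`, etc., one PNG per metric, at whatever resolution you need.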

Hello, where can I find the option to download the logs from the cloud dashboard? I used cloud training in Ultralytics HUB: I uploaded the dataset, selected a model, and trained it.

Hello, I am unable to find a "Download logs" or similar option in the cloud dashboard.

This is what TensorBoard shows me when I tried to use the "Share" URL:
[screenshot]

I tried making the URL public, and there was still no data.