open-mmlab / mmyolo

OpenMMLab YOLO series toolbox and benchmark. Implemented RTMDet, RTMDet-Rotated, YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOX, PPYOLOE, etc.

Home Page: https://mmyolo.readthedocs.io/zh_CN/dev/

Got all evaluation results of -1 on a custom dataset.

RalphGuo opened this issue

Prerequisite

💬 Describe the reimplementation questions

I tried to train on my own data using YOLOv5; however, at every validation stage I got the following results, all of which are -1.

Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = -1.000
...
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
11/08 16:40:01 - mmengine - INFO - bbox_mAP_copypaste: -1.000 -1.000 -1.000 -1.000 -1.000 -1.000

I use my training data for validation and the result is the same, so I don't think it's due to bad training;
I checked my label.json, and the areas look normal (roughly the check sketched below);
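
For reference, a check along these lines can be scripted with pycocotools (a minimal sketch; the file name and path are assumptions taken from the config below):

from pycocotools.coco import COCO

# Path assumed from the config below; adjust to the actual label file.
coco = COCO('../Dataset/uf_easy/train.json')
areas = [ann['area'] for ann in coco.anns.values()]
print('annotations:', len(areas))
print('area range :', min(areas), '-', max(areas))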

Here is my config. I'm new to OpenMMLab, so I only changed a little from the balloon detection tutorial, and there is only 1 category in my data.

_base_ = './yolov5_s-v61_syncbn_fast_8xb16-300e_coco.py'
data_root = '../Dataset/uf_easy/'
img_scale = (640, 640)
deepen_factor = 0.33
widen_factor = 0.5
max_epochs = 300

# These two values are referenced below but were not shown in the post;
# the numbers here are assumptions.
train_batch_size_per_gpu = 16
train_num_worker = 8

metainfo = {
    'CLASSES': ('uf', ),
    'PALETTE': [
        (220, 20, 60),
    ]
}

train_dataloader = dict(
    batch_size=train_batch_size_per_gpu,
    num_workers=train_num_worker,
    dataset=dict(
        data_root=data_root,
        metainfo=metainfo,
        data_prefix=dict(img='train/'),
        ann_file='train.json'))
val_dataloader = dict(
    batch_size=train_batch_size_per_gpu,
    num_workers=train_num_worker,
    dataset=dict(
        data_root=data_root,
        metainfo=metainfo,
        data_prefix=dict(img='train/'),
        ann_file='train.json'))
test_dataloader = val_dataloader
val_evaluator = dict(ann_file=data_root + 'train.json')
test_evaluator = val_evaluator
model = dict(bbox_head=dict(head_module=dict(num_classes=1)))
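
One sanity check that can be run against a config like this (a sketch, not part of the config itself; it assumes pycocotools is installed) is that the category names in train.json line up with the single class declared in metainfo, since a mismatch may leave the dataset and evaluator with no usable ground truth:

from pycocotools.coco import COCO

metainfo_classes = ('uf', )  # same tuple as in metainfo above
coco = COCO('../Dataset/uf_easy/train.json')  # data_root + ann_file from the config
ann_classes = {cat['name'] for cat in coco.cats.values()}
print('classes in annotations:', ann_classes)
print('classes in metainfo   :', set(metainfo_classes))
if ann_classes != set(metainfo_classes):
    print('WARNING: class names do not match; annotations may be filtered out')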

Environment

TorchVision: 0.10.0+cu111
OpenCV: 4.5.3
MMEngine: 0.1.0

Expected results

No response

Additional information

No response

@RalphGuo

  • COCO Dataset, AP or AR = -1
    1. According to the COCO definition, a small object has an area of less than 1024 (32*32) pixels and a medium object has an area of less than 9216 (96*96) pixels.
    2. If the corresponding area range contains no object, the AP and AR for that range are set to -1 (the exact ranges the evaluator uses are printed in the sketch below).
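
For reference, the area ranges the COCO evaluator actually uses can be printed straight from pycocotools (a small sketch, independent of mmyolo):

from pycocotools.cocoeval import Params

params = Params(iouType='bbox')
for label, (lo, hi) in zip(params.areaRngLbl, params.areaRng):
    print(f'{label:>6}: {lo:.0f} to {hi:.0f} px^2')
# Prints: all 0-1e10, small 0-1024, medium 1024-9216, large 9216-1e10.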


@hhaAndroid Hi, thanks for the quick reply.
i. I checked my json file and the areas are normal; they range from about 7k to 110k, so I don't think that's the problem.
ii. As I posted, I use the same data in train_dataloader and val_dataloader/test_dataloader, so I should have gotten a good evaluation result, right? Is there any chance that the corresponding area range has no object under these conditions? (A quick way to check is sketched below.)
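
One way to check this directly is to count the ground-truth boxes per COCO area range (a sketch with pycocotools; the path is an assumption taken from the config above):

from pycocotools.coco import COCO

coco = COCO('../Dataset/uf_easy/train.json')  # path assumed from the config above
buckets = {'small': 0, 'medium': 0, 'large': 0}
for ann in coco.anns.values():
    if ann['area'] < 32 ** 2:
        buckets['small'] += 1
    elif ann['area'] < 96 ** 2:
        buckets['medium'] += 1
    else:
        buckets['large'] += 1
print(buckets)  # any bucket that stays at 0 is reported as -1 for that range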

@RalphGuo This situation is indeed a bit strange. Did you find out why?