Evaluation metrics explanation
ojasvijain opened this issue
Hi Team,
I was wondering how you are computing the metrics for evaluation. I was going through the metrics.py file and came across the ap_per_class function, which seems to compute the average precision for each class in an image. (FYI: my custom dataset only has one class, with a lot of objects of that class in a single image.)
I wanted to understand what *stats (the parameter passed to the function) is in test.py, and how it helps in assigning a predicted class to a ground truth.
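For context, in YOLOv7-style evaluation code (which DynamicDet inherits), stats is typically a list with one tuple per image, holding the correctness flags, confidences, predicted classes, and target classes; `*stats` unpacks that list so the four fields can be concatenated across images before being handed to ap_per_class. A minimal sketch, assuming that per-image layout (the exact field shapes in DynamicDet may differ):

```python
import numpy as np

# Hypothetical per-image tuples mirroring the (correct, conf, pred_cls, target_cls)
# layout used in YOLOv7-style test.py -- an assumption, not verified against DynamicDet.
stats = [
    # image 1: two predictions, one ground truth
    (np.array([[True], [False]]), np.array([0.9, 0.4]), np.array([0, 0]), np.array([0])),
    # image 2: one prediction, two ground truths
    (np.array([[True]]), np.array([0.8]), np.array([0]), np.array([0, 0])),
]

# zip(*stats) groups each of the four fields across images;
# each group is then concatenated into one flat array for ap_per_class.
correct, conf, pred_cls, target_cls = [np.concatenate(x, 0) for x in zip(*stats)]

print(correct.shape, conf.shape, pred_cls.shape, target_cls.shape)
```

The resulting flat arrays are what the per-class AP computation actually consumes: every prediction from every image, ranked later by confidence.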
Also, I wanted to know how you associate a particular prediction with a ground truth. Is it based solely on the highest IoU value? If so, what happens when a ground truth is assigned to one prediction (and eliminated from the iteration once assigned), but a prediction further down the iteration would have had a higher IoU with it?
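To make the concern concrete: YOLOv7-style evaluation processes predictions in descending confidence order and greedily claims the best unmatched ground truth above an IoU threshold. Here is a toy sketch of that greedy matching (my own simplified illustration, not DynamicDet's actual code):

```python
def box_iou(a, b):
    """IoU between two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def greedy_match(preds, gts, iou_thres=0.5):
    """Assign each ground truth to at most one prediction, greedily.
    preds: list of (conf, box); gts: list of boxes.
    Returns a dict mapping gt index -> pred index."""
    matches = {}
    # Visit predictions in descending confidence, as in YOLOv7-style eval.
    for pi, (conf, pbox) in sorted(enumerate(preds), key=lambda t: -t[1][0]):
        best_iou, best_gt = iou_thres, None
        for gi, gbox in enumerate(gts):
            if gi in matches:
                continue  # this ground truth is already claimed; no reassignment
            iou = box_iou(pbox, gbox)
            if iou > best_iou:
                best_iou, best_gt = iou, gi
        if best_gt is not None:
            matches[best_gt] = pi
    return matches
```

In this sketch, once a ground truth is claimed it is never revisited, so a lower-ranked prediction with a higher IoU cannot reclaim it; confidence order, not IoU order, decides who matches first.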
Thanks!
Hi, thanks!
DynamicDet's codebase is based on yolov7, so we recommend asking these questions in the yolov7 repository.
Thanks for your response. I just wanted to understand what is happening in the backend of DynamicDet, specifically with regard to the metrics.py and test.py files.
If I run the model on test data, how can I access the precision, recall, etc. scores?