AXERA-TECH / ax-samples

Sample code for world-class Artificial Intelligence SoCs for computer vision applications.

Evaluate exported `.joint` object detection model on COCO

mikel-brostrom opened this issue · comments

mikel-brostrom commented

I want to run inference on my exported INT8 `.joint` model locally (on my laptop). The idea is to evaluate its mAP performance. I only see one way of doing this:

```
pulsar run \
    my_model.joint \
    --input resnet18_export_data/images/cat.jpg \
    --output_gt inference_results
```

And then read the generated `.npy` file. I want to avoid reading from and writing to files during evaluation, as file I/O is very time-consuming. Is there a way to run inference using Python? I don't see anybody wanting to deploy a model without knowing its performance... How do you calculate the performance of your exported INT8 `.joint` models?
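For now the best I can do is script that command from Python and read the dumps back with numpy. A rough sketch (how pulsar names the `.npy` files inside `inference_results` is an assumption on my part; adjust the glob to whatever it actually writes):

```python
# Rough sketch (not an official API): drive `pulsar run` from Python and
# load its .npy dumps back with numpy.
import glob
import subprocess

import numpy as np

def simulate_one(model: str, image: str, out_dir: str = "inference_results"):
    """Run one image through the x86 simulator and return the raw outputs."""
    subprocess.run(
        ["pulsar", "run", model, "--input", image, "--output_gt", out_dir],
        check=True,
    )
    # Assumption: one .npy file per output tensor lands somewhere in out_dir.
    return [np.load(p) for p in sorted(glob.glob(f"{out_dir}/**/*.npy", recursive=True))]

outputs = simulate_one("my_model.joint", "resnet18_export_data/images/cat.jpg")
for arr in outputs:
    print(arr.shape, arr.dtype)
```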

commented

Simulation runs very slowly, so we usually test only a single image on x86. Accuracy evaluation such as mAP we always do on the board.
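For the mAP bookkeeping itself, once the detections (from the board or from the simulator) are decoded into COCO's result-JSON format, the standard pycocotools flow applies. A minimal sketch, where the annotation and detection file paths are placeholders:

```python
# Minimal sketch of COCO mAP evaluation with pycocotools. It assumes the
# decoded detections were written to detections.json in COCO result format:
# [{"image_id": ..., "category_id": ..., "bbox": [x, y, w, h], "score": ...}]
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")  # val2017 ground truth
coco_dt = coco_gt.loadRes("detections.json")          # the model's detections

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP, AP50, AP75, ... for the quantized model
```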

mikel-brostrom commented

Could you provide COCO results for the INT8 models, so that people know what level of performance degradation to expect from the quantized models, @BUG1989?