How can I evaluate model inference on CPU?
Hello-Hyuk opened this issue
How can I evaluate model inference on CPU? I don't know how to approach this.
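Below is a minimal sketch of one way to time raw forward-pass latency on CPU with PyTorch. The use of `yolox.exp.get_exp`, the experiment name `"yolox-s"`, and the checkpoint path `yolox_s.pth` are assumptions based on the repository's layout, not a confirmed recipe; adjust them to your setup.

```python
import time
import torch

# Assumption: YOLOX is installed and its experiment system exposes get_exp.
from yolox.exp import get_exp

# Build the model from a built-in experiment name (assumed: "yolox-s").
exp = get_exp(exp_name="yolox-s")
model = exp.get_model()
model.eval()

# Load weights onto CPU; the checkpoint path is hypothetical.
ckpt = torch.load("yolox_s.pth", map_location="cpu")
model.load_state_dict(ckpt["model"])

# Dummy input at the experiment's test resolution: (batch, channels, H, W).
x = torch.randn(1, 3, *exp.test_size)

with torch.no_grad():
    # Warm-up passes so one-time costs don't skew the measurement.
    for _ in range(5):
        model(x)

    # Timed runs averaged over several iterations.
    n = 20
    start = time.perf_counter()
    for _ in range(n):
        model(x)
    elapsed = time.perf_counter() - start

print(f"Average CPU latency: {elapsed / n * 1000:.1f} ms per image")
```

The warm-up loop matters because the first few forward passes often pay one-time costs such as memory allocation and kernel selection; averaging repeated runs with `time.perf_counter()` gives a steadier latency figure. Note this measures speed only; evaluating accuracy (e.g. COCO mAP) on CPU would be a separate step through the project's evaluation tooling.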