SthPhoenix / InsightFace-REST

InsightFace REST API for easy deployment of face recognition services with TensorRT in Docker.

Thank you for excellent work. How about TRT batch inference?

tungdq212 opened this issue · comments

Thank you for the excellent work.

Detection models can now be exported to TRT engines with batch size > 1, but the inference code doesn't support this yet; such engines can, however, already be used in Triton Inference Server without issues.

Is there any plan for this? Or how can I implement batch inference myself?

Hi! Batch inference is already supported for all recognition models and for SCRFD and YOLOv5 family detection models.
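For anyone implementing batching on top of a fixed-batch TRT engine, the usual pattern is to split the input stream into fixed-size chunks and pad the final partial chunk, since the engine expects every input tensor to contain exactly `batch_size` entries. A minimal sketch (helper name and signature are hypothetical, not from the repo):

```python
def make_batches(items, batch_size, pad_value=None):
    """Split `items` into fixed-size batches, padding the last one.

    Returns a list of (batch, n_real) pairs, where `n_real` is the
    number of genuine items in the batch, so padded outputs can be
    discarded after inference.
    """
    batches = []
    for start in range(0, len(items), batch_size):
        chunk = list(items[start:start + batch_size])
        n_real = len(chunk)
        # Pad the final partial batch up to the engine's batch size.
        chunk += [pad_value] * (batch_size - n_real)
        batches.append((chunk, n_real))
    return batches


# Example: five face crops fed to an engine built with batch size 4.
print(make_batches([1, 2, 3, 4, 5], 4))
# → [([1, 2, 3, 4], 4), ([5, None, None, None], 1)]
```

After running each batch through the engine, slicing the output to the first `n_real` rows drops the results produced for the padding entries.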