SthPhoenix / InsightFace-REST

InsightFace REST API for easy deployment of face recognition services with TensorRT in Docker.


Triton inference

gulldan opened this issue · comments

commented

Hello,

I'm trying to use Triton Inference Server (tritonserver:21.04-py3) with a TRT plan from your project. Model: retinaface.

Your TRT backend works perfectly:

```
docker run -p 18081:18080 -d --gpus 0 -e LOG_LEVEL=INFO -e PYTHONUNBUFFERED=0 -e NUM_WORKERS=1 -e INFERENCE_BACKEND=trt -e FORCE_FP16=True -e DET_NAME=retinaface_r50_v1 -e DET_THRESH=0.6 -e REC_NAME=glint360k_r100FC_1.0 -e REC_IGNORE=False -e REC_BATCH_SIZE=64 -e GA_NAME=genderage_v1 -e GA_IGNORE=False -e KEEP_ALL=True -e MAX_SIZE=1024,780 -e DEF_RETURN_FACE_DATA=True -e DEF_EXTRACT_EMBEDDING=True -e DEF_EXTRACT_GA=True -e DEF_API_VER='1'
```

But when I try to port your post-processing from
https://github.com/SthPhoenix/InsightFace-REST/blob/master/src/api_trt/modules/model_zoo/detectors/retinaface.py#L268
to handle Triton's results, dw and dh end up as empty lists for stride 16.

Could you recommend anything?
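For reference, one quick way to narrow down the empty `dw`/`dh` problem is to compare each returned tensor's shape against what stride-based post-processing expects. A minimal sketch, assuming the usual RetinaFace head layout of 2 anchors per spatial cell (the layout and names here are assumptions for illustration, not taken from the repo):

```python
import numpy as np

def expected_retinaface_shapes(h, w, strides=(32, 16, 8), num_anchors=2):
    """Expected output tensor shapes for a RetinaFace-style head.

    For each stride s the head emits, per spatial cell:
      scores:    2 * num_anchors channels (bg/fg per anchor)
      bbox:      4 * num_anchors channels (dx, dy, dw, dh per anchor)
      landmarks: 10 * num_anchors channels (5 points x 2 coords per anchor)
    """
    shapes = {}
    for s in strides:
        fh, fw = h // s, w // s
        shapes[f"stride{s}"] = {
            "scores": (1, 2 * num_anchors, fh, fw),
            "bbox": (1, 4 * num_anchors, fh, fw),
            "landmarks": (1, 10 * num_anchors, fh, fw),
        }
    return shapes

shapes = expected_retinaface_shapes(768, 1024)
print(shapes["stride16"]["bbox"])  # (1, 8, 48, 64)
```

If the tensor Triton hands back for stride 16 doesn't match the expected shape, the slicing in the post-processing loop will silently produce empty lists, which points at an output-ordering or naming mismatch rather than a broken model.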

Triton config:
triton_model_config.zip

Jupyter notebook:
triton_test.zip

I've quickly read through your .ipynb; I can't say anything definite right now, I'll check my local Triton branch later.
However, I noticed you are using some odd image preprocessing in preprocess_image:

```
# HWC to CHW format:
image -= (104, 117, 123)
```

Such preprocessing isn't required for RetinaFace.
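To make the point concrete, a minimal preprocessing sketch without any mean subtraction: raw pixel values cast to float32, HWC transposed to CHW, plus a batch dimension. This is an illustration of the idea, not the repo's exact preprocess_image:

```python
import numpy as np

def preprocess_image(image: np.ndarray) -> np.ndarray:
    """HWC uint8 image -> NCHW float32 blob, no mean subtraction."""
    blob = image.astype(np.float32)       # keep raw pixel values as-is
    blob = np.transpose(blob, (2, 0, 1))  # HWC -> CHW
    return np.expand_dims(blob, axis=0)   # add batch dim -> NCHW

img = np.zeros((640, 640, 3), dtype=np.uint8)
print(preprocess_image(img).shape)  # (1, 3, 640, 640)
```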

commented

> Though I have noticed you are using strange image preprocessing in preprocess_image:
> image -= (104, 117, 123)

I've deleted that line; I forgot to remove it.

I use https://netron.app to debug the converted ONNX.

I'll upload up-to-date Triton backend in day or two, which you can use as reference for your experiments.

In your Triton config the outputs seem to be OK, though I'd check that Triton returns the outputs in exactly the same order required by RetinaFace.
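One way to guard against ordering problems is to look results up by name and rebuild the list in the order the post-processing expects, rather than trusting the server's return order. A sketch with assumed output names (these follow the usual MXNet RetinaFace export naming; verify against your own ONNX/plan in Netron):

```python
# Output order the stride-based post-processing expects:
# for each stride: class scores, bbox deltas, landmark deltas.
EXPECTED_ORDER = [
    "face_rpn_cls_prob_reshape_stride32",
    "face_rpn_bbox_pred_stride32",
    "face_rpn_landmark_pred_stride32",
    "face_rpn_cls_prob_reshape_stride16",
    "face_rpn_bbox_pred_stride16",
    "face_rpn_landmark_pred_stride16",
    "face_rpn_cls_prob_reshape_stride8",
    "face_rpn_bbox_pred_stride8",
    "face_rpn_landmark_pred_stride8",
]

def order_outputs(results: dict) -> list:
    """Rebuild Triton results (output name -> array) into the expected order."""
    missing = [n for n in EXPECTED_ORDER if n not in results]
    if missing:
        raise KeyError(f"missing outputs: {missing}")
    return [results[name] for name in EXPECTED_ORDER]
```

With tritonclient, `results` would be built by calling `as_numpy(name)` for each configured output name before handing the ordered list to the post-processing code.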

I have updated the Triton backend. Keep in mind that communicating with the Triton server from Python introduces large delays and greatly reduces performance.
You may notice commented-out lines related to using CUDA shared memory; that may help you regain some performance, though use it at your own risk )

commented

thank you

Potential delays can be attributed to several things:

  1. FP32 - I was unable to get Triton to treat the converted model as FP16. I think INT8 should speed up inference even further, but the model needs to be correctly converted and calibrated first.
  2. All processing needs to be moved into Triton. For image preprocessing I found DALI. Post-processing is more complicated because it requires moving NMS and a few other RetinaFace-specific steps; I seem to have found a plugin from NVIDIA somewhere, but embedding it into the pipeline is not a trivial task for me at the moment )
  3. Following from the last point, it should all go into a Triton ensemble right away, so that data isn't shuttled back and forth.

That's what a performance study looks like to me, but it's a rather involved process.
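The ensemble idea in point 3 would look roughly like this in a Triton model repository: a scheduling-only ensemble model whose config.pbtxt chains preprocessing, the detector, and post-processing so tensors never leave the server. All model, tensor, and dimension names below are placeholders, and the real RetinaFace plan has nine outputs rather than the single one shown:

```protobuf
# Hypothetical config.pbtxt for an ensemble wrapping the pipeline
name: "retinaface_ensemble"
platform: "ensemble"
input [ { name: "RAW_IMAGE", data_type: TYPE_UINT8, dims: [ -1, -1, 3 ] } ]
output [ { name: "DETECTIONS", data_type: TYPE_FP32, dims: [ -1, 15 ] } ]
ensemble_scheduling {
  step [
    {
      model_name: "preprocess"        # e.g. a DALI or python-backend model
      model_version: -1
      input_map { key: "IMAGE", value: "RAW_IMAGE" }
      output_map { key: "TENSOR", value: "preprocessed" }
    },
    {
      model_name: "retinaface_r50_v1" # the TensorRT plan
      model_version: -1
      input_map { key: "data", value: "preprocessed" }
      output_map { key: "out", value: "raw_outputs" }
    },
    {
      model_name: "postprocess"       # python backend: box decode + NMS
      model_version: -1
      input_map { key: "IN", value: "raw_outputs" }
      output_map { key: "DETECTIONS", value: "DETECTIONS" }
    }
  ]
}
```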

Have you tried any of these?

  1. I have noticed that Triton for some reason reports the model as FP32, but if you compare the actual performance of the FP32 and FP16 models with the Triton perf client, the difference is obvious.
  2. I haven't tested it yet since it requires a lot of changes to the source code, but now that Triton supports a Python backend and DALI preprocessing, it's really worth a try.
  3. It won't be trivial to put the whole face detection/recognition pipeline into Triton, but it's very promising, especially considering that some parts of the pipeline could be replaced with C++.
commented

Thanks,
I'll wait for such updates if this intersects with your interests.

close


I'm definitely interested in such updates, though it might take a while )