SthPhoenix / InsightFace-REST

InsightFace REST API for easy deployment of face recognition services with TensorRT in Docker.

how to start

alicera opened this issue

How do I use insight2onnx.py?

python src/converters/modules/converters/insight2onnx.py

Traceback (most recent call last):
  File "src/converters/modules/converters/insight2onnx.py", line 13, in <module>
    from .mx2onnx_conv import onnx as onnx_mxnet
ModuleNotFoundError: No module named '__main__.mx2onnx_conv'; '__main__' is not a package

Please use src/converters/build_insight_trt.py as an example.
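
For context, the traceback above is expected when the module is run directly: insight2onnx.py uses a package-relative import (from .mx2onnx_conv import ...), and a file executed as a script runs as __main__, which is not a package, so the relative import fails. That is why build_insight_trt.py is the recommended entry point. For illustration only, here is a minimal sketch of the equivalent MXNet-to-ONNX export using the stock MXNet converter rather than the repo's patched one; the model paths are assumptions:

import numpy as np
from mxnet.contrib import onnx as onnx_mxnet

# Hypothetical paths to an insightface MXNet checkpoint
sym = 'models/arcface_r100/model-symbol.json'
params = 'models/arcface_r100/model-0000.params'

# ArcFace recognition models take 112x112 RGB crops as input
onnx_mxnet.export_model(sym, params,
                        input_shape=[(1, 3, 112, 112)],
                        input_type=np.float32,
                        onnx_file_path='models/arcface_r100.onnx',
                        verbose=True)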

Is your CUDA 11?

Yes, by default the Docker image is built upon the TensorRT 20.09 docker image, which uses CUDA 11 and requires 450.x NVIDIA drivers.
If you want CUDA 10.2, you can build upon the 20.03 image.
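
If you are not sure which stack is inside your container, a quick sanity check from Python (a minimal sketch; it assumes the tensorrt and pycuda packages are installed, as in the project's Docker image):

import tensorrt as trt
import pycuda.driver as cuda

cuda.init()
print('TensorRT:', trt.__version__)
print('CUDA (compiled against):', cuda.get_version())     # e.g. (11, 0, 0)
print('CUDA driver version:', cuda.get_driver_version())  # e.g. 11000
print('GPU:', cuda.Device(0).name())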

Ok, thank you!
Have you tried the new model for Glint360k?
https://github.com/deepinsight/insightface/tree/master/recognition/partial_fc

AFAIK the model hasn't been released yet; I have only tested Sub-center ArcFace.

I tried to run "python src/converters/build_insight_trt.py" in the Docker image that you provide, and it shows the messages below.
Are these messages normal?

mxnet version: 1.7.0
onnx version: 1.7.0
Converting MXNet model to ONNX...
[15:18:03] src/nnvm/legacy_json_util.cc:209: Loading symbol saved by previous version v1.5.0. Attempting to upgrade...
[15:18:03] src/nnvm/legacy_json_util.cc:217: Symbol successfully upgraded!
WARNING:root:Using an experimental ONNX operator: Crop. Its definition can change.
WARNING:root:Using an experimental ONNX operator: Crop. Its definition can change.
Building TensorRT engine...
[TensorRT] WARNING: [TRT]/home/jenkins/workspace/OSS/L0_MergeRequest/oss/parsers/onnx/onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[TensorRT] WARNING: No implementation obeys reformatting-free rules, at least 13 reformatting nodes are needed, now picking the fastest path instead.
TensorRT model ready.

The first TensorRT warning always fires for insightface models, since their exported ONNX graphs contain INT64 weights, which TensorRT casts down to INT32; it's normal.

The second warning fires when TensorRT detects that your GPU has fast FP16 support and builds the engine with FP16 enabled. I haven't seen any side effects from this warning, so it's normal too.

Also, it looks like you are using an older version of this repo, since the latest version logs when FP16 is used.
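
For reference, the FP16 decision during engine building looks roughly like this in the TensorRT 7 Python API (a minimal sketch, not the repo's exact code; the workspace size is an arbitrary assumption):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, fp16=True):
    builder = trt.Builder(TRT_LOGGER)
    # The ONNX parser requires an explicit-batch network
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, 'rb') as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError('ONNX parsing failed')
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB
    # Enable FP16 only when the GPU actually has fast FP16 support,
    # and log the decision, which is what newer repo versions do.
    if fp16 and builder.platform_has_fast_fp16:
        print('Building TensorRT engine with FP16 enabled')
        config.set_flag(trt.BuilderFlag.FP16)
    return builder.build_engine(network, config)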

Does it support 2d106det?
How do I use it?

This repository contains code to convert the 2d106det model, but inference code is not yet implemented, since right now it has no actual use in the face detection/recognition pipeline implemented in the REST API.

Added support for Glint360k models.

Closing due to long inactivity.