SthPhoenix / InsightFace-REST

InsightFace REST API for easy deployment of face recognition services with TensorRT in Docker.


performance drop

jeffryuiop opened this issue · comments

hi, thank you for sharing your work.

I just found that there might be a drop in accuracy after testing on LFW: after conversion to ONNX the model only achieves around 90% accuracy, and recall is now around 86%.

Hi! That's interesting, could you provide your testing code and more info about models you have tested?

So I just used the test pairs .txt file provided on the LFW website, ran your ONNX ArcFace model (the one produced by running your Docker image) with onnxruntime, and then checked the results at various thresholds.
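For reference, the threshold check described above can be sketched roughly like this (a minimal sketch; embedding extraction with onnxruntime is assumed to happen elsewhere, and the `pairs`/`embeddings` names are hypothetical, not from the repo):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def accuracy_at_threshold(pairs, embeddings, threshold):
    """Fraction of LFW-style pairs classified correctly at a given threshold.

    pairs: list of (name_a, name_b, is_same) tuples from the pairs .txt file
    embeddings: dict mapping image name -> embedding vector (e.g. from onnxruntime)
    """
    correct = 0
    for name_a, name_b, is_same in pairs:
        sim = cosine_similarity(embeddings[name_a], embeddings[name_b])
        correct += ((sim >= threshold) == is_same)
    return correct / len(pairs)
```

Sweeping `threshold` over a grid and taking the best value is the usual way the single LFW accuracy number is reported.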

I noticed that you converted the model from MXNet. Would there be a difference in the results if we converted the PyTorch model to ONNX instead?

Have you tested the same code with the backend set to mxnet?
The accuracy drop you mention is far too large. It might be caused by improper validation, or possibly by a serious bug in my code, but a drop like that should be easily noticeable in most use cases.

I haven't tested the PyTorch models yet.

Okay, let me double-check and re-run the code then.

Any updates?

Hi, sorry for the late update. I still get the same result for mobilenet RetinaFace + ArcFace ResNet100. However, yesterday I tried the ONNX models uploaded to insightface (antelope), containing SCRFD 10G and ArcFace ResNet100 (glint), and I got 0.9985 accuracy at threshold == 0.26 on the LFW test set. So I think the newer models are better.

Are you sure the correct preprocessing was used for the older models? If I understand you correctly, you used the ONNX models in your own code?
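Preprocessing mismatches like this are a common source of accuracy drops. A hedged sketch of two conventions often seen with ArcFace-style models (which convention a given export actually expects must be checked against its export code; the `normalize` flag here is purely illustrative):

```python
import numpy as np

def preprocess(img_bgr, normalize=False):
    """Prepare a 112x112 aligned face crop for an ArcFace-style ONNX model.

    Two input conventions are common (assumption, not verified against
    this repo's models):
      - raw RGB float values in [0, 255], as the original MXNet exports use
      - values scaled as (x - 127.5) / 128.0, used by some newer exports
    Feeding one convention to a model trained with the other can silently
    degrade accuracy without crashing.
    """
    img_rgb = img_bgr[:, :, ::-1].astype(np.float32)  # BGR -> RGB
    if normalize:
        img_rgb = (img_rgb - 127.5) / 128.0
    # HWC -> NCHW with a leading batch dimension, as ONNX models expect
    return np.transpose(img_rgb, (2, 0, 1))[np.newaxis, :]
```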

It is certainly possible that I screwed up somewhere. I translated the Python code to a C++ version and ran the test on that. One suspicion I have is that I installed the wrong onnx/mxnet version, because I did not use your Docker image. Anyway, I think the issue is resolved now, so I am going to close it.