huangyangyu / SeqFace

SeqFace : Making full use of sequence information for face recognition

Home Page: https://arxiv.org/pdf/1803.06524.pdf


Use sphereface code to test your model

deepage opened this issue · comments

Thank you for sharing this nice work.
I downloaded your res-27 model and tested it on LFW, but only got 99.48%; maybe something went wrong when I aligned the photos.
I detect all photos with MTCNN and align them using these template points:
coord5point = [46.29460144, 59.69630051;
               81.53179932, 59.50139999;
               64.02519989, 79.73660278;
               49.54930115, 100.3655014;
               78.72990417, 100.20410156];
then crop to 128x128.
Is this right? Or is the problem here?
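For reference, one common way to realize the alignment described above is to solve for a similarity transform (scale, rotation, translation) that maps the detected MTCNN landmarks onto the five template points quoted above, then warp the image with it. This is only a sketch of that idea, not the repo's actual util.py code; the NumPy-only least-squares solve below stands in for helpers like cv2.estimateAffinePartial2D, and the `detected` landmarks are hypothetical.

```python
import numpy as np

# Template landmarks quoted in the issue (x, y) for a 128x128 crop:
# left eye, right eye, nose tip, left mouth corner, right mouth corner.
TEMPLATE = np.array([
    [46.29460144,  59.69630051],
    [81.53179932,  59.50139999],
    [64.02519989,  79.73660278],
    [49.54930115, 100.3655014],
    [78.72990417, 100.20410156],
], dtype=np.float64)

def similarity_transform(src, dst):
    """Least-squares similarity transform mapping src points onto dst points,
    parameterized as the 2x3 matrix [[a, -b, tx], [b, a, ty]]."""
    n = src.shape[0]
    A = np.zeros((2 * n, 4))
    # Row 2i encodes  a*x - b*y + tx = x';  row 2i+1 encodes  b*x + a*y + ty = y'.
    A[0::2, 0] = src[:, 0]; A[0::2, 1] = -src[:, 1]; A[0::2, 2] = 1
    A[1::2, 0] = src[:, 1]; A[1::2, 1] =  src[:, 0]; A[1::2, 3] = 1
    a, b, tx, ty = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)[0]
    return np.array([[a, -b, tx], [b, a, ty]])

# Hypothetical detected landmarks: a scaled and shifted copy of the template.
detected = TEMPLATE * 2.0 + np.array([10.0, 5.0])
M = similarity_transform(detected, TEMPLATE)
# M could then be passed to cv2.warpAffine(img, M, (128, 128)) to produce
# the 128x128 aligned crop described above.
```

If the repo's util.py uses a different template or a non-similarity warp, results will differ slightly, which may matter at the 99.4-99.6% accuracy level.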
In the end, I used the evaluation code from sphereface and got 99.48% accuracy (with image flip); without image flip it gets 99.43%.
Can you give me some advice on what I should do?
BTW, I also tried your python code norml2_sim, but got the same result.
Thanks very much!

commented

I guess the alignment method may be different. How do you align the face using the key points? We share our alignment method in the util.py file. You can try our test script, which uses already-aligned faces.

The alignment is introduced in https://github.com/AlfredXiangWu/face_verification_experiment ; we use the same method (but with RGB images).