SthPhoenix / InsightFace-REST

InsightFace REST API for easy deployment of face recognition services with TensorRT in Docker.

Is this a bug?

json007 opened this issue · comments

INFO: Uvicorn running on http://0.0.0.0:18080 (Press CTRL+C to quit)
[15:44:52] INFO - Uvicorn running on http://0.0.0.0:18080 (Press CTRL+C to quit)

Traceback (most recent call last):
File "/data/service/InsightFace-REST/src/api_trt/./modules/processing.py", line 208, in embed_crops
_face_dict = serialize_face(face=next(faces), return_face_data=False,
File "/data/service/InsightFace-REST/src/api_trt/./modules/face_model.py", line 133, in process_faces
embeddings = self.rec_model.get_embedding(crops)
File "/data/service/InsightFace-REST/src/api_trt/./modules/model_zoo/exec_backends/trt_backend.py", line 74, in get_embedding
embeddings = self.rec_model.run(face_img, deflatten=True)[0]
File "/data/service/InsightFace-REST/src/api_trt/./modules/model_zoo/exec_backends/trt_loader.py", line 101, in run
self.inputs[0].host[:allocate_place] = input.flatten(order='C').astype(np.float32)
ValueError: could not broadcast input array from shape (177540,) into shape (37632,)

INFO: 172.21.110.251:52523 - "POST /extract HTTP/1.1" 200 OK

The request parameters are as follows:
{
"images": {
"data": [
"string"
],
"urls": [
"test_images/Stallone.jpg"
]
},
"max_size": [
640,
640
],
"threshold": 0.6,
"embed_only": true,
"return_face_data": false,
"return_landmarks": false,
"extract_embedding": true,
"extract_ga": false,
"limit_faces": 1,
"verbose_timings": true,
"api_ver": "1"
}
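
For context, a request with this body can be sent with a short script like the following (a minimal sketch using the `requests` library; it assumes the service is reachable at the host and port shown in the log above):

```python
# Sketch: send the request body above to the /extract endpoint.
# Host/port are taken from the Uvicorn log line; adjust for your deployment.
import requests

payload = {
    "images": {
        "urls": ["test_images/Stallone.jpg"]
    },
    "max_size": [640, 640],
    "threshold": 0.6,
    "embed_only": True,   # with this flag the API expects pre-cropped 112x112 faces (see reply below)
    "return_face_data": False,
    "return_landmarks": False,
    "extract_embedding": True,
    "extract_ga": False,
    "limit_faces": 1,
    "verbose_timings": True,
    "api_ver": "1",
}

resp = requests.post("http://localhost:18080/extract", json=payload)
print(resp.json())
```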

This is expected behavior: if you set `"embed_only": true`, the API expects the images to already be cropped 112x112 face images.

In future versions this mode will be implemented as a separate endpoint to avoid confusion.
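
For illustration, a minimal sketch of the `embed_only` workflow under that assumption: the image passed in `data` is already a 112x112 face crop, sent base64-encoded (the file name `face_crop.jpg` and the host/port are placeholders, and the base64 encoding of the `data` field is an assumption):

```python
# Sketch: with embed_only=true, send an already-cropped 112x112 face, not a full photo.
import base64
import cv2
import requests

# Load a previously detected face region and resize it to the 112x112 input
# size expected by the recognition model (3 * 112 * 112 = 37632 values,
# which matches the target shape in the traceback above).
crop = cv2.imread("face_crop.jpg")          # placeholder path
crop = cv2.resize(crop, (112, 112))

# Encode the crop as JPEG, then base64, for the "data" field (assumed format).
ok, buf = cv2.imencode(".jpg", crop)
b64 = base64.b64encode(buf.tobytes()).decode("ascii")

payload = {
    "images": {"data": [b64]},
    "embed_only": True,          # skip detection, run recognition directly on the crop
    "extract_embedding": True,
    "api_ver": "1",
}

resp = requests.post("http://localhost:18080/extract", json=payload)
print(resp.json())
```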