DefTruth / torchlm

💎 A high-level pipeline for face landmarks detection. It supports training, evaluating, exporting, inference (Python/C++), and 100+ data augmentations, and can be easily installed via pip.

Home Page: https://github.com/DefTruth/torchlm

Inference on PyTorch backend does not work

tongnews opened this issue · comments

Hi, I tested the inference code for predicting landmarks using the 300W pretrained weights, like this:

import torchlm
from torchlm.tools import faceboxesv2
from torchlm.models import pipnet

checkpoint = "pipnet_resnet101_10x68x32x256_300w.pth"
torchlm.runtime.bind(faceboxesv2())
torchlm.runtime.bind(
    pipnet(backbone="resnet101", pretrained=True,
           num_nb=10, num_lms=68, net_stride=32, input_size=256,
           meanface_type="300w", map_location="cpu",
           backbone_pretrained=True, checkpoint=checkpoint)
)  # will auto-download pretrained weights from the latest release if pretrained=True

When I run the forward code:

    landmarks, bboxes = torchlm.runtime.forward(frame_bgr)

I got this error:

File ~/miniconda3/envs/torchlm/lib/python3.9/site-packages/torchlm/runtime/_wrappers.py:120, in forward(image, extend, swapRB_before_face, swapRB_before_landmarks, **kwargs)
    105 def forward(
    106         image: np.ndarray,
    107         extend: float = 0.2,
    (...)
    110         **kwargs: Any  # params for face_det & landmarks_det
    111 ) -> Tuple[_Landmarks, _BBoxes]:
    112     """
    113     :param image: original input image, HWC, BGR/RGB
    114     :param extend: extend ratio for face cropping (1.+extend) before landmarks detection.
    (...)
    118     :return: landmarks (n,m,2) -> x,y; bboxes (n,5) -> x1,y1,x2,y2,score
    119     """
--> 120     return RuntimeWrapper.forward(
    121         image=image,
    122         extend=extend,
    123         swapRB_before_face=swapRB_before_face,
    124         swapRB_before_landmarks=swapRB_before_landmarks,
    125         **kwargs
    126     )

File ~/miniconda3/envs/torchlm/lib/python3.9/site-packages/torchlm/runtime/_wrappers.py:50, in RuntimeWrapper.forward(cls, image, extend, swapRB_before_face, swapRB_before_landmarks, **kwargs)
     48     bboxes = cls.face_base.apply_detecting(image_swapRB, **kwargs)  # (n,5) x1,y1,x2,y2,score
     49 else:
---> 50     bboxes = cls.face_base.apply_detecting(image, **kwargs)  # (n,5) x1,y1,x2,y2,score
     52 det_num = bboxes.shape[0]
     53 landmarks = []

File ~/miniconda3/envs/torchlm/lib/python3.9/site-packages/torch/autograd/grad_mode.py:28, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs)
     25 @functools.wraps(func)
     26 def decorate_context(*args, **kwargs):
     27     with self.__class__():
---> 28         return func(*args, **kwargs)

File ~/miniconda3/envs/torchlm/lib/python3.9/site-packages/torchlm/tools/_faceboxesv2.py:340, in FaceBoxesV2.apply_detecting(self, image, thresh, im_scale, top_k)
    338 # keep top-K before NMS
    339 order = scores.argsort()[::-1][:top_k * 3]
--> 340 boxes = boxes[order]
    341 scores = scores[order]
    343 # nms

IndexError: index 1 is out of bounds for axis 0 with size 1

Could you help me with this? Thanks!
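
For context, the IndexError above is a plain NumPy fancy-indexing failure: the order index array built from scores refers to more candidates than boxes actually has rows. A minimal sketch that reproduces the same message (illustrative only, not the detector's actual code or data):

    import numpy as np

    # hypothetical mismatch: scores for two candidates, but only one box row
    scores = np.array([0.9, 0.3])               # shape (2,)
    boxes = np.array([[10., 10., 50., 50.]])    # shape (1, 4)

    order = scores.argsort()[::-1][:300 * 3]    # -> array([0, 1])
    boxes = boxes[order]                        # IndexError: index 1 is out of bounds
                                                # for axis 0 with size 1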

What is the version of your installed torchlm? Check it with:

print(torchlm.__version__)

The version is 0.1.6.8.

Can you upload your test image? I need it to reproduce your error.

Fixed ~ you can try the latest version, v0.1.6.9:

pip uninstall torchlm
pip install "torchlm>=0.1.6.9"
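
After reinstalling, you can rerun the version check from above to confirm the upgrade took effect:

    import torchlm
    print(torchlm.__version__)  # expect 0.1.6.9 or newer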

Test demo:

    import cv2
    import torchlm
    from torchlm.tools import faceboxesv2
    from torchlm.models import pipnet

    device = "cpu"
    img_path = "./assets/pipnet0.jpg"
    save_path = "./logs/pipnet0_300w.jpg"
    checkpoint = "./pretrained/pipnet/pipnet_resnet101_10x68x32x256_300w.pth"
    image = cv2.imread(img_path)

    # bind the face detector first, then the landmarks model
    torchlm.runtime.bind(faceboxesv2())
    torchlm.runtime.bind(
        pipnet(
            backbone="resnet101",
            pretrained=True,
            num_nb=10,
            num_lms=68,
            net_stride=32,
            input_size=256,
            meanface_type="300w",
            backbone_pretrained=False,
            map_location=device,
            checkpoint=checkpoint
        )
    )
    # run face detection + landmarks, then draw the results
    landmarks, bboxes = torchlm.runtime.forward(image)
    image = torchlm.utils.draw_bboxes(image, bboxes=bboxes)
    image = torchlm.utils.draw_landmarks(image, landmarks=landmarks)

    cv2.imwrite(save_path, image)

You can set backbone_pretrained to False, because you already have a pretrained checkpoint on your local device, so it's not necessary to download the backbone weights from the internet.
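
For reference, the comment in the first snippet says pretrained=True auto-downloads the full PIPNet weights from the latest release, so if you have no local .pth file you could simply drop the checkpoint argument. A sketch of that variant, assuming checkpoint can be omitted (i.e. left at its default):

    # Variant without a local checkpoint: rely on the auto-download
    # mentioned in the first snippet (pretrained=True).
    torchlm.runtime.bind(faceboxesv2())
    torchlm.runtime.bind(
        pipnet(
            backbone="resnet101",
            pretrained=True,        # fetch PIPNet weights from the latest release
            num_nb=10, num_lms=68, net_stride=32, input_size=256,
            meanface_type="300w",
            map_location="cpu",
        )
    )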

Works perfectly! Thank you!