NVIDIA / retinanet-examples

Fast and accurate object detection with end-to-end GPU optimization

Import of torchvision or ONNX model trained with the default PyTorch implementation

Michelvl92 opened this issue · comments

Hi,

torchvision has its own official RetinaNet implementation, which can be loaded as follows:

torchvision.models.detection.retinanet_resnet50_fpn()

See: https://pytorch.org/vision/stable/models.html#object-detection-instance-segmentation-and-person-keypoint-detection
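For reference, a minimal usage sketch of this torchvision model (the `pretrained=` flag is the older torchvision API; newer releases use `weights=` instead):

```python
import torch
import torchvision

# Official torchvision RetinaNet with a ResNet50-FPN backbone.
model = torchvision.models.detection.retinanet_resnet50_fpn(pretrained=True)
model.eval()

# The detection models take a list of 3xHxW float tensors in [0, 1].
images = [torch.rand(3, 640, 640)]
with torch.no_grad():
    outputs = model(images)

# One dict per image with 'boxes', 'scores' and 'labels'.
print(outputs[0]["boxes"].shape, outputs[0]["scores"].shape)
```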

I have a lot of models trained with this implementation, but I need to export them to NVIDIA TensorRT for the NVIDIA Triton Inference Server.

Is it possible to import those models directly from PyTorch/torchvision into this ODTK and then convert them to TensorRT, with detection performance comparable to the trained PyTorch/torchvision model? Or is there some other route to import such a PyTorch/torchvision model, e.g. by first converting it to ONNX and then importing it again with this ODTK? Or are the implementation differences too big?
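For reference, the "first convert to ONNX" route I have in mind would look roughly like this (a sketch following the torchvision ONNX export docs; the output file name is a placeholder, and operator/opset support caveats apply):

```python
import torch
import torchvision

model = torchvision.models.detection.retinanet_resnet50_fpn(pretrained=True)
model.eval()

# The detection models expect a list of 3xHxW tensors.
images = [torch.rand(3, 640, 640)]

torch.onnx.export(
    model,
    (images,),                     # wrap the image list as a single forward() argument
    "retinanet_torchvision.onnx",  # placeholder output path
    opset_version=11,              # the torchvision detection models need opset >= 11
)
```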

What would you suggest, and are there any examples for that?

Hi, ODTK uses only the feature extractor (e.g. ResNet/ResNeXt/MobileNet) from torchvision; it exports its own models to ONNX and creates a TensorRT engine with post-processing plugins. It doesn't support external models. The TensorRT export path is very specific to the models in this repo.
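For reference, building a plain TensorRT engine from such an ONNX file yourself (outside ODTK) could look roughly like the sketch below. Note that this does not include ODTK's post-processing plugins, and the TensorRT ONNX parser may not handle all of torchvision's exported post-processing operators; paths and sizes are placeholders, and the exact builder calls differ between TensorRT versions.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Placeholder ONNX path; parse errors are likely if unsupported ops are present.
with open("retinanet_torchvision.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # newer TensorRT: config.set_memory_pool_limit(...)

# TensorRT 8+: build_serialized_network; older releases use builder.build_engine.
serialized_engine = builder.build_serialized_network(network, config)
with open("retinanet_torchvision.plan", "wb") as f:
    f.write(serialized_engine)
```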