royinx / TensorRT_Deployment

Convert models (ONNX, PyTorch) into engines for a TensorRT inference server

TensorRT

docker pull nvcr.io/nvidia/tensorrt:19.12-py3

docker run --privileged --rm -it \
    -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY \
    -v ~/Desktop/python:/py -w /py \
    --runtime=nvidia nvcr.io/nvidia/tensorrt:19.12-py3 bash
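Inside the container, an ONNX model can be parsed and compiled into a TensorRT engine with the `tensorrt` Python bindings that ship with the NGC image. The sketch below is a minimal example, assuming the TensorRT 6.x API of the 19.12 release (explicit-batch networks, `build_cuda_engine`); the ONNX file path is a placeholder.

```python
def build_engine(onnx_path: str, workspace_bytes: int = 1 << 30):
    """Parse an ONNX file and build a TensorRT engine.

    Assumes it runs inside the nvcr.io/nvidia/tensorrt:19.12-py3
    container, where the `tensorrt` Python package is preinstalled.
    """
    import tensorrt as trt  # bundled with the NGC TensorRT image

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)

    # TensorRT 6 (the 19.12 release) requires an explicit-batch
    # network definition when parsing ONNX models.
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, logger)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            # Surface parser diagnostics before giving up.
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("failed to parse ONNX model")

    # Scratch memory TensorRT may use for layer tactics (1 GiB here).
    builder.max_workspace_size = workspace_bytes
    return builder.build_cuda_engine(network)
```

The returned engine can be persisted with `engine.serialize()` and written to disk as a `.plan` file, so the (slow) build step runs once rather than at every server start.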


Languages

Python 98.2%, Shell 1.8%