https://github.com/bubbliiiing/yolov4-tiny-pytorch/blob/master/README.md
Convert the PyTorch model to ONNX:
python convert_onnx.py --model_name [...]
You can run this in Docker or in your local environment; running with Docker is recommended.
Install NVIDIA Docker and pull the TensorRT image:
docker pull nvcr.io/nvidia/tensorrt:21.07-py3
Run the container:
docker run -it --runtime=nvidia -v $(pwd):/yolo-tiny --name yolov4-tiny nvcr.io/nvidia/tensorrt:21.07-py3
docker exec -it yolov4-tiny bash
cd /yolo-tiny
python3 build_engine.py -m yolo-tiny
python3 infer_trt.py