Sample TensorRT Inference Server (TRTIS) source code.
- Docker
- Docker-Compose
- git
This repository creates the following containers:
- Server Container
- Client Container
- Custom-Backend Container
- This container builds the TRTIS CustomInstance source code.
To create and start the containers, execute the following command:
$ bash setup_trtis_docker_containers.sh
After the command above finishes, all containers will be created and running.
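Since the repository lists Docker-Compose as a prerequisite, the containers it creates can be thought of as services in a compose file. The sketch below is hypothetical: the service names, image tag, build context, and model-repository path are assumptions for illustration, not the repository's actual values; only the client container name (trtis-client-container) and the standard TRTIS ports come from this document and the TRTIS documentation.

```yaml
# Hypothetical compose sketch; names, tags, and paths are assumptions.
version: "3"
services:
  trtis-server:
    image: nvcr.io/nvidia/tensorrtserver:19.10-py3   # assumed release tag
    ports:
      - "8000:8000"   # HTTP endpoint
      - "8001:8001"   # gRPC endpoint
      - "8002:8002"   # metrics endpoint
    volumes:
      - ./models:/models                             # assumed model repository path
    command: trtserver --model-repository=/models
  trtis-client:
    container_name: trtis-client-container           # name used with docker exec below
    build: ./client                                  # assumed build context
    tty: true
```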
If you want to use the default TRTIS models (e.g. ResNet50, sequence), execute the following command:
$ bash fetch_default_models.sh
- Attach to the TRTIS client container:
$ docker exec -it trtis-client-container bash
- Execute a client script inside the TRTIS client container (e.g. the simple model):
$ python3 src/clients/python/simple_client.py
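The "simple" example model in the TRTIS distribution takes two 16-element INT32 input tensors and returns their element-wise sum (OUTPUT0) and difference (OUTPUT1). A minimal local sketch of that arithmetic is shown below so the client's printed results can be sanity-checked; the input values here are illustrative and not necessarily the exact values simple_client.py sends.

```python
# Sketch of the tensor math the "simple" example model performs:
# OUTPUT0 = INPUT0 + INPUT1, OUTPUT1 = INPUT0 - INPUT1 (int32[16]).
# The real client sends these tensors to the server over HTTP/gRPC;
# here we only reproduce the expected result locally.

input0 = list(range(16))   # illustrative values: [0, 1, ..., 15]
input1 = [1] * 16          # illustrative values: all ones

output0 = [a + b for a, b in zip(input0, input1)]  # element-wise sum
output1 = [a - b for a, b in zip(input0, input1)]  # element-wise difference

print(output0[:4])  # [1, 2, 3, 4]
print(output1[:4])  # [-1, 0, 1, 2]
```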
- Execute a custom client script inside the TRTIS client container (e.g. the sample_instance model):
$ python3 custom_client/python/sample_instance_client.py