jjerry-k / triton_sample

Triton Inference Server Example

Quick Setting

The Quick Setting runs on CPU only.
The model format is TorchScript.
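Because Triton serves the model as TorchScript, the downloaded weights must be saved with `torch.jit` before being placed in the repository. A minimal sketch of producing a `model.pt` with `torch.jit.trace`, using a hypothetical tiny network in place of the actual downloaded model:

```python
import torch
import torch.nn as nn


# Hypothetical stand-in for the real downloaded model.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)


model = TinyNet().eval()
example = torch.randn(1, 4)          # example input fixes the traced shape
traced = torch.jit.trace(model, example)
traced.save("model.pt")              # this file goes to model_repository/{model name}/1/
```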

  1. Download the model.
  2. Move the model.pt file to model_repository/{model name}/1.
  3. Run docker compose:

docker compose up
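Triton loads models from the model repository directory mounted into the container. A sketch of the expected layout together with a hypothetical config.pbtxt for a TorchScript model (the input/output shapes, data types, and batch size below are illustrative assumptions, not values taken from this repo; Triton's libtorch backend names tensors by position, e.g. INPUT__0):

```
# model_repository/
# └── {model name}/
#     ├── config.pbtxt
#     └── 1/
#         └── model.pt

name: "{model name}"
platform: "pytorch_libtorch"
max_batch_size: 8
input [
  {
    name: "INPUT__0"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "OUTPUT__0"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```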

To Do List

  • GPU Mode
  • Detection router
    • Postprocessing
  • Segmentation router
  • Variable input type
  • Variable output type
