RUSH.AI's repositories
licence-plate-triton-server-ensemble
A Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python. The repository's stack includes YOLOv8, ONNX, EasyOCR, Triton Inference Server, OpenCV (cv2), MinIO, Docker, and Kubernetes, all deployed on NVIDIA K80 GPUs with CUDA 11.4.
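As a rough illustration of the post-processing step such a Python backend might run after EasyOCR, here is a minimal sketch that merges OCR fragments into a normalized plate string. The function name, result format, and confidence threshold are assumptions for illustration, not code from the repository:

```python
import re

def postprocess_plate(ocr_results, min_conf=0.5):
    """Merge EasyOCR-style (bbox, text, confidence) results into a single
    normalized plate string, dropping low-confidence fragments.

    Illustrative sketch only: the name, threshold, and character set
    are assumptions, not the repository's actual implementation.
    """
    # Keep fragments EasyOCR was reasonably confident about.
    kept = [text for _bbox, text, conf in ocr_results if conf >= min_conf]
    plate = "".join(kept)
    # Strip separators and anything that cannot appear on a plate.
    return re.sub(r"[^A-Z0-9]", "", plate.upper())
```

In a real Triton Python backend this logic would live inside the model's `execute` method, operating on tensors received from the upstream OCR model in the ensemble.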
triton-server-ensemble-sidecar
The Triton server is difficult for a client to consume directly, whether over its REST API or gRPC. For clients that want to customize the request body, this repository offers a sidecar that wraps the Triton client behind a REST API on Kubernetes.
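One way such a sidecar can simplify things is by translating a client's custom JSON body into the payload shape Triton's KServe v2 REST inference API expects. A minimal sketch, where the custom body's field names (`input_name`, `shape`, `data`) are assumptions rather than the sidecar's actual schema:

```python
def to_triton_request(custom_body):
    """Translate a simplified client body into a Triton v2 REST
    inference payload ({"inputs": [...]}).

    Illustrative sketch: the custom field names and the FP32 default
    are assumptions, not the sidecar's real contract.
    """
    return {
        "inputs": [
            {
                "name": custom_body["input_name"],
                "shape": custom_body["shape"],
                # Default to FP32 when the client omits the datatype.
                "datatype": custom_body.get("datatype", "FP32"),
                "data": custom_body["data"],
            }
        ]
    }
```

The sidecar would POST the resulting payload to `v2/models/<model>/infer` on the Triton container in the same pod, keeping the custom schema out of the model server itself.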
kubernetes-manifest
YAML manifests describing the objects that exist in the RUSH.AI Kubernetes cluster.