Determine why VLLM will not start up with decorators, replace non-decorator deployment
CollectiveUnicorn opened this issue
Gato commented
Describe what should be investigated or refactored
While updating VLLM to use confz, an issue was identified where VLLM would not start up if the leapfrogai-api decorators (such as @llm) were used.
Once the root cause is identified, replace the current non-decorator implementation with the decorators.
Links to any relevant code
Additional context
This only occurs in a k8s deployment; the issue does not occur when using docker or running locally.