triton-inference-server / triton_cli

Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inference Server.

Pip installation failed with conflicting tritonclient error

yzhao-2023 opened this issue

Workaround

Remove the explicit tritonclient dependency from pyproject.toml:

-    "tritonclient[all] >= 2.38",
+    # "tritonclient[all] >= 2.38",

Then install tritonclient manually with pip install 'tritonclient[all]'
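The edit above can be sketched as follows. This is shown against a minimal sample file so it is self-contained; in practice the change goes into pyproject.toml in a triton_cli checkout, and the file name here is an illustration only.

```shell
# Create a minimal stand-in for the dependency section of pyproject.toml
cat > pyproject-sample.toml <<'EOF'
dependencies = [
    "tritonclient[all] >= 2.38",
]
EOF

# Comment the pin out rather than deleting it, so it is easy to restore later
sed -i 's/"tritonclient\[all\] >= 2.38",/# "tritonclient[all] >= 2.38",/' pyproject-sample.toml
cat pyproject-sample.toml

# Then install the client explicitly before installing the CLI from source:
#   pip install 'tritonclient[all]'
#   pip install .
```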

The error looks like the following:

[screenshot of pip's conflicting-dependency error; image not recoverable]

PS: posting from the live session of GTC 2024 (benchmarking and optimizing LLM inference), great release!

Hi @yzhao-2023, thanks for filing our first issue! Hope you had a good time at GTC this week!

Can you share the exact steps you followed to reproduce this error and some more details on the environment / currently installed versions?
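The environment details asked for above can be gathered with a few standard pip commands. This is a sketch: the package names "tritonclient" and "triton_cli" are assumptions, so adjust them to whatever pip list reports on your machine.

```shell
# Report the interpreter version and the installed versions, if any,
# of the two packages involved in the conflict
python3 --version
python3 -m pip show tritonclient 2>/dev/null || echo "tritonclient not installed"
python3 -m pip show triton_cli 2>/dev/null || echo "triton_cli not installed"

# pip check reports any installed packages with conflicting requirements
python3 -m pip check || true
```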

I wasn't able to reproduce this error by doing the following:

# Use docker container for consistency
docker run -it \
    --name triton \
    --gpus all --network host \
    --shm-size=1g --ulimit memlock=-1 \
    -v /tmp:/tmp \
    -v ${HOME}:/workspace \
    -v ${HOME}/.cache/huggingface:/root/.cache/huggingface \
    -w /workspace \
    nvcr.io/nvidia/tritonserver:24.02-trtllm-python-py3

# Pre-install tritonclient before installing CLI
pip install 'tritonclient[all]'

# Install CLI
pip install git+https://github.com/triton-inference-server/triton_cli

So I just wanted to double check the steps you're taking.

Closing due to inactivity.

Please try with the latest version and let us know if you run into any issues.