alibaba / rtp-llm

RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications.

bazel cu11x build fails

cwlseu opened this issue:

Repository rule pip_repository defined at:
  /root/.cache/bazel/_bazel_root/e69d74ffae3ef08170c3c0eab7aa7a24/external/rules_python/python/pip_install/pip_repository.bzl:69:33: in <toplevel>
ERROR: An error occurred during the fetch of repository 'pip_gpu_cuda12_torch':
   Traceback (most recent call last):
        File "/root/.cache/bazel/_bazel_root/e69d74ffae3ef08170c3c0eab7aa7a24/external/rules_python/python/pip_install/pip_repository.bzl", line 65, column 13, in _pip_repository_impl
                fail("rules_python_external failed: %s (%s)" % (result.stdout, result.stderr))
Error in fail: rules_python_external failed:  (Timed out)

When building from source inside registry.cn-hangzhou.aliyuncs.com/havenask/rtp_llm:cuda11, why is there a dependency on pip_gpu_cuda12_torch?

For compatibility reasons, the dependencies for both CUDA 12 and CUDA 11 are currently fetched. We will look into whether there is a way to handle this later.
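
Since the failure is a pip fetch timeout rather than a missing package, one possible workaround while both dependency sets are still fetched is to route pip through a nearby mirror and raise the fetch timeout. Below is a minimal sketch of what that could look like with rules_python's pip_parse (or pip_install, which uses the same underlying pip_repository rule seen in the error trace); the requirements file, mirror URL, and timeout value are illustrative assumptions, not the project's actual configuration:

# WORKSPACE -- illustrative sketch, not rtp-llm's actual configuration.
load("@rules_python//python:pip.bzl", "pip_parse")

pip_parse(
    # Repository name taken from the error message.
    name = "pip_gpu_cuda12_torch",
    # Hypothetical lock file; the real file name may differ.
    requirements_lock = "//:requirements_cuda12.txt",
    # Route pip through a domestic mirror and raise pip's own
    # network timeout, so slow access to PyPI is less likely to stall.
    extra_pip_args = [
        "-i", "https://mirrors.aliyun.com/pypi/simple/",
        "--timeout", "600",
    ],
    # The "(Timed out)" in the log is the repository rule's execute
    # timeout; pip_repository exposes a timeout attribute (seconds)
    # that can be raised if the install legitimately takes long.
    timeout = 3600,
)

Whether this helps depends on how the project's WORKSPACE actually declares pip_gpu_cuda12_torch; until the CUDA 11 and CUDA 12 dependency sets are split, both fetches still have to succeed for the build to proceed.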