AkariAsai / self-rag

This repository includes the original implementation of Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection by Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi.

Home Page: https://selfrag.github.io/


Requirement conflicts for vllm and flash-attn

xhd0728 opened this issue

Hi, I encountered some issues while trying to run it. I attempted to install the dependency packages in requirements.txt, but found a version conflict between two of them:

flash-attn requires torch>=1.13.0 and torch<2.0.0
vllm requires torch>=2.0.0.

Besides, I also ran into the same problem as #26.
May I ask if you have encountered this? Thank you!
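One way to sidestep this clash, anticipating the separate-environment advice later in this thread, is to keep the two torch pins in different environments. A minimal sketch, assuming conda and Python 3.8 as in the traceback below (env names are illustrative, not from the repo):

```bash
# Env 1: inference -- vllm wants torch>=2.0.0.
conda create -n selfrag python=3.8 -y
conda activate selfrag
pip install vllm                 # pulls in a torch>=2.0.0 build

# Env 2: flash-attn side, honoring the torch<2.0.0 pin quoted above.
conda create -n selfrag-train python=3.8 -y
conda activate selfrag-train
pip install "torch>=1.13.0,<2.0.0"
pip install "flash-attn<2.0"     # the 1.x line, built against torch 1.13
```

Newer flash-attn releases target torch 2.x, so loosening the repo's pin may also let a single environment work; the split above is just a fallback.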

Same here.

I tried to install flash-attn by following pypi_link and it worked, but then I hit a new issue:

When I ran pip install factscore==0.1.5, the torch version was downgraded from 2.0.1 to 1.13.1, which broke vllm. The error message is:

Traceback (most recent call last):
  File "start.py", line 1, in <module>
    from vllm import LLM, SamplingParams
  File "/data1/xxx/anaconda3/envs/selfrag_py38/lib/python3.8/site-packages/vllm/__init__.py", line 3, in <module>
    from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
  File "/data1/xxx/anaconda3/envs/selfrag_py38/lib/python3.8/site-packages/vllm/engine/arg_utils.py", line 6, in <module>
    from vllm.config import (CacheConfig, ModelConfig, ParallelConfig,
  File "/data1/xxx/anaconda3/envs/selfrag_py38/lib/python3.8/site-packages/vllm/config.py", line 9, in <module>
    from vllm.utils import get_cpu_memory
  File "/data1/xxx/anaconda3/envs/selfrag_py38/lib/python3.8/site-packages/vllm/utils.py", line 8, in <module>
    from vllm._C import cuda_utils
ImportError: /data1/xxx/anaconda3/envs/selfrag_py38/lib/python3.8/site-packages/vllm/_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN3c104cuda20CUDACachingAllocator9allocatorE

my venv:

CUDA/nvcc toolkit: 12.0.1
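An undefined symbol in vllm/_C is what you would expect when vllm's precompiled extension was built against a different torch ABI than the torch now installed (here, the 1.13.1 that the factscore install pulled in). A hedged recovery sketch, assuming this env is meant for vllm inference:

```bash
# Restore the torch build vllm expects (2.0.1 is the version this thread
# started from), then force-reinstall vllm so its compiled _C extension
# links against the same torch ABI.
pip install torch==2.0.1
pip install --force-reinstall vllm

# Sanity check: the exact import from the traceback above.
python -c "from vllm import LLM, SamplingParams; print('vllm import OK')"
```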

For inference, you only need to install torch and vllm; factscore is for the Bio evaluation. Actually, the long-form generation code seems incomplete and the adaptive mode doesn't work well, so you can skip the factscore installation if you just want to get it running.
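Concretely, an inference-only setup along those lines can stay very small. A sketch, where the model ID is an assumption taken from the project page rather than this thread:

```bash
# Minimal inference-only environment: just vllm (which brings torch>=2.0.0),
# no factscore and no flash-attn.
pip install vllm

# Smoke test with the released Self-RAG checkpoint (downloads several GB;
# model ID assumed from the project README).
python -c "from vllm import LLM; LLM('selfrag/selfrag_llama2_7b', dtype='half')"
```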

Thanks, I will give it a shot 🤗

Hi @xhd0728, sorry for my late response! As @Loose-Gu mentioned, you can create a separate env for factscore if the issue remains. I ran into a similar issue while working on the project, and I also ended up creating a separate environment. I'll update the README and requirements.txt shortly. Thanks for your patience!

FYI: a new PR has been merged to fix the conflicts (thanks @zlwang-cs!). Please git pull if you still see the issue.
#32