FlagAI-Open / FlagAI

FlagAI (Fast LArge-scale General AI models) is a fast, easy-to-use and extensible toolkit for large-scale models.

[Question]: ModuleNotFoundError: No module named 'flagai.model.tools.peft'

linsida1 opened this issue

Description

I downloaded the source with git clone https://github.com/FlagAI-Open/FlagAI.git and installed the dependencies following README.md.

From the ./FlagAI directory, import flagai.model.tools.peft succeeds.

But after changing into ./FlagAI/examples/Aquila/Aquila-chat and running the fine-tuning command:
bash local_trigger_docker.sh hostfile Aquila-chat-lora.yaml aquilachat-7b aquila_demo
I get this error:
ModuleNotFoundError: No module named 'flagai.model.tools.peft'
What could be causing this?
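
One way to see the conflict is to check which flagai installation Python actually resolves (a minimal diagnostic sketch, not part of FlagAI; run it once from the repo root and once from the examples directory):

import importlib.util
import flagai

# From ./FlagAI this prints the checkout's flagai/__init__.py; from any other
# directory it prints the copy installed in site-packages, which may be older.
print(flagai.__file__)

# None means the resolved installation has no flagai.model.tools.peft.
try:
    spec = importlib.util.find_spec("flagai.model.tools.peft")
except ModuleNotFoundError:
    spec = None
print("peft available:", spec is not None)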

The full error output follows:

[INFO] bmtrain_mgpu.sh: hostfile configfile model_name exp_name exp_version
[2023-07-06 17:12:45,452] [INFO] [logger.py:85:log_dist] [Rank -1] Unsupported bmtrain
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /data/third/llm_test/FlagAI/examples/Aquila/Aquila-chat/aquila_chat.py:17 in <module> │
│ │
│ │
│ 14 import jsonlines │
│ 15 import numpy as np │
│ 16 import cyg_conversation as conversation_lib │
│ ❱ 17 from flagai.model.tools.peft.prepare_lora import lora_transfer │
│ 18 device = torch.device("cuda" if torch.cuda.is_available() else "cpu") │
│ 19 │
│ 20 # You can input all parameters by the command line. │
╰──────────────────────────────────────────────────────────────────────────────╯
ModuleNotFoundError: No module named 'flagai.model.tools.peft'
/data/ai/llm-dev/lib/python3.8/site-packages/torch/distributed/launch.py:181: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects --local-rank argument to be set, please
change it to read from os.environ['LOCAL_RANK'] instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

warnings.warn(
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 179814) of binary: /data/ai/llm-dev/bin/python
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/data/ai/llm-dev/lib/python3.8/site-packages/torch/distributed/launch.py", line 196, in
main()
File "/data/ai/llm-dev/lib/python3.8/site-packages/torch/distributed/launch.py", line 192, in main
launch(args)
File "/data/ai/llm-dev/lib/python3.8/site-packages/torch/distributed/launch.py", line 177, in launch
run(args)
File "/data/ai/llm-dev/lib/python3.8/site-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/data/ai/llm-dev/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/data/ai/llm-dev/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

aquila_chat.py FAILED

Failures:
<NO_OTHER_FAILURES>

Root Cause (first observed failure):
[0]:
time : 2023-07-06_17:12:49
host : xlink-data
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 179814)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
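
(The FutureWarning in the log above is unrelated to the missing module; it only notes that torch.distributed.launch is deprecated. For reference, migrating to torchrun means reading the local rank from the environment instead of a --local-rank argument, roughly like this sketch:)

import os

# torchrun exports LOCAL_RANK for each worker process; default to 0 when
# the script is launched without torchrun.
local_rank = int(os.environ.get("LOCAL_RANK", 0))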

Alternatives

No response

Strange, running bash local_trigger_docker.sh hostfile Aquila-chat-lora.yaml aquilachat-7b aquila_demo works fine on my side.

The flagai in your environment may still be an old version. Run pip uninstall flagai repeatedly until it is completely removed, then run python setup.py install from the FlagAI root directory and try again.
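
After reinstalling, a quick sanity check can confirm the fresh package is the one being picked up (an illustrative check, not an official step; run it from outside the FlagAI checkout so the installed copy is the one tested):

import importlib.util
import flagai

print(flagai.__file__)  # should now point at the freshly installed package
# A ModuleNotFoundError or None here means an old copy is still being resolved.
print(importlib.util.find_spec("flagai.model.tools.peft"))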


Closing this issue for now; feel free to reopen if the problem persists. Thanks.