liangwq / Chatglm_lora_multi-gpu

Multi-GPU ChatGLM with DeepSpeed and …

Running web_ui.py raises: NameError: name 'LoraConfig' is not defined

Cola-Ice opened this issue

[root@VM-245-18-centos webui]# streamlit run web_ui.py --server.port 8080

Collecting usage statistics. To deactivate, set browser.gatherUsageStats to False.

You can now view your Streamlit app in your browser.

Network URL: http://10.0.245.18:8080

Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
2023-04-26 10:22:28.285 Uncaught app exception
Traceback (most recent call last):
  File "/root/anaconda3/envs/lora/lib/python3.9/site-packages/streamlit/runtime/caching/cache_utils.py", line 245, in _get_or_create_cached_value
    cached_result = cache.read_result(value_key)
  File "/root/anaconda3/envs/lora/lib/python3.9/site-packages/streamlit/runtime/caching/cache_resource_api.py", line 447, in read_result
    raise CacheKeyNotFoundError()
streamlit.runtime.caching.cache_errors.CacheKeyNotFoundError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/anaconda3/envs/lora/lib/python3.9/site-packages/streamlit/runtime/caching/cache_utils.py", line 293, in _handle_cache_miss
    cached_result = cache.read_result(value_key)
  File "/root/anaconda3/envs/lora/lib/python3.9/site-packages/streamlit/runtime/caching/cache_resource_api.py", line 447, in read_result
    raise CacheKeyNotFoundError()
streamlit.runtime.caching.cache_errors.CacheKeyNotFoundError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/anaconda3/envs/lora/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "/root/Chatglm_lora_multi-gpu/webui/web_ui.py", line 58, in <module>
    st.session_state["state"] = predict(prompt_text, st.session_state["state"])
  File "/root/Chatglm_lora_multi-gpu/webui/web_ui.py", line 35, in predict
    tokenizer, model = get_model()
  File "/root/anaconda3/envs/lora/lib/python3.9/site-packages/streamlit/runtime/caching/cache_utils.py", line 194, in wrapper
    return cached_func(*args, **kwargs)
  File "/root/anaconda3/envs/lora/lib/python3.9/site-packages/streamlit/runtime/caching/cache_utils.py", line 223, in __call__
    return self._get_or_create_cached_value(args, kwargs)
  File "/root/anaconda3/envs/lora/lib/python3.9/site-packages/streamlit/runtime/caching/cache_utils.py", line 248, in _get_or_create_cached_value
    return self._handle_cache_miss(cache, value_key, func_args, func_kwargs)
  File "/root/anaconda3/envs/lora/lib/python3.9/site-packages/streamlit/runtime/caching/cache_utils.py", line 302, in _handle_cache_miss
    computed_value = self._info.func(*func_args, **func_kwargs)
  File "/root/Chatglm_lora_multi-gpu/webui/web_ui.py", line 17, in get_model
    peft_config = LoraConfig(
NameError: name 'LoraConfig' is not defined

import torch
from peft import get_peft_model, LoraConfig, TaskType

web_ui.py builds a LoraConfig inside get_model() without ever importing it from peft; adding these two dependencies at the top of the file fixed it.
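
For context, here is a minimal sketch of what the patched get_model() might look like once the imports are in place. The checkpoint path and the LoRA hyperparameters (r, lora_alpha, lora_dropout) are assumptions for illustration, not the repo's actual values:

import torch
import streamlit as st
from transformers import AutoModel, AutoTokenizer
from peft import get_peft_model, LoraConfig, TaskType  # the two missing names

@st.cache_resource
def get_model():
    # Hypothetical checkpoint; the repo's web_ui.py hard-codes its own path.
    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
    model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
    # This is the call (web_ui.py line 17) that raised NameError before
    # LoraConfig was imported from peft.
    peft_config = LoraConfig(
        task_type=TaskType.CAUSAL_LM,  # assumed hyperparameters
        inference_mode=True,
        r=8,
        lora_alpha=32,
        lora_dropout=0.1,
    )
    model = get_peft_model(model, peft_config)
    model.eval()
    return tokenizer, model

Note that @st.cache_resource is why the traceback routes through Streamlit's caching machinery: the first call misses the cache (CacheKeyNotFoundError) and then executes get_model(), which is where the NameError surfaces.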