InternLM / InternLM

Official release of the InternLM2 7B and 20B base and chat models, with 200K context support.

Home Page: https://internlm.intern-ai.org.cn/


[Bug] Error when loading internlm2-7b-chat

xxg98 opened this issue

Describe the bug

d/deploy/source_model/base.py", line 174, in bins
for mgr in self.get_mgrs():
File "/root/autodl-tmp/projects/LLM/LMDeploy_Py3_8/lmdeploy_py3_8/lib/python3.8/site-packages/lmdeploy/turbomind/deploy/source_model/llama.py", line 159, in get_mgrs
new_params = torch.load(osp.join(self.ckpt_path, ckpt),
File "/root/autodl-tmp/projects/LLM/LMDeploy_Py3_8/lmdeploy_py3_8/lib/python3.8/site-packages/torch/serialization.py", line 1028, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/root/autodl-tmp/projects/LLM/LMDeploy_Py3_8/lmdeploy_py3_8/lib/python3.8/site-packages/torch/serialization.py", line 1246, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
EOFError: Ran out of input

Environment

Python 3.8
CUDA 11.8

Other information

No response

Please share the lmdeploy version you are using, the target device, and the code or command needed to reproduce the issue.

Found the cause: it was a problem with the model files. Thank you.