QwenLM / Qwen-VL

The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision-language model proposed by Alibaba Cloud.

[BUG] AutoGPTQForCausalLM.from_quantized("Qwen/Qwen-VL-Chat-Int4", ...) throws an error

xiayq1 opened this issue · comments

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

From a script found in the repo:

elif quant_type == "int4":
    # please install AutoGPTQ following the readme to use quantization
    from auto_gptq import AutoGPTQForCausalLM
    model = AutoGPTQForCausalLM.from_quantized(
        "Qwen/Qwen-VL-Chat-Int4",
        device="cuda:0",
        trust_remote_code=True,
        use_safetensors=True,
        use_flash_attn=use_flash_attn,
    ).eval()

Running this raises an error:
FileNotFoundError: Could not find a model in Qwen-VL-Chat-Int4 with a name in model.safetensors. Please specify the argument model_basename to use a custom file name.

I found that someone else has hit the same problem:
AutoGPTQ/AutoGPTQ#319

I took a look at the source code: it only tries to load a single model file. But aren't there five?
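
For what it's worth, the error message itself suggests one workaround: older auto_gptq releases look for a single weight file (model.safetensors by default) and let you override that name with the model_basename argument. A minimal sketch, assuming a checkpoint whose weights sit in one file named model.safetensors (an assumption; this will not help with a checkpoint split into shards):

from auto_gptq import AutoGPTQForCausalLM

# Assumption: the weights are stored in a single file model.safetensors,
# so the basename to pass is "model" (no extension). A checkpoint split
# into shards still fails here; see the fix at the end of this thread.
model = AutoGPTQForCausalLM.from_quantized(
    "Qwen/Qwen-VL-Chat-Int4",
    model_basename="model",
    device="cuda:0",
    trust_remote_code=True,
    use_safetensors=True,
).eval()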

So why does the author's test script work? Loading the model and running inference with transformers works fine for me.
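
For reference, the transformers-based path looks roughly like the snippet in the Qwen-VL README (a sketch; the device_map value here is an assumption for a single-GPU setup):

from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code is required so the custom Qwen-VL modeling code is used.
tokenizer = AutoTokenizer.from_pretrained(
    "Qwen/Qwen-VL-Chat-Int4", trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-VL-Chat-Int4",
    device_map="cuda:0",  # assumption: single-GPU setup
    trust_remote_code=True,
).eval()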

Expected Behavior

None

Steps To Reproduce

None

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

None

I ran into the same problem. Have you found a solution?

Updating auto_gptq to the latest version on GitHub fixes it; they have resolved this problem.
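
In practice that means installing auto_gptq from source rather than from PyPI, along these lines (the exact URL is an assumption; check the AutoGPTQ repo):

# Assumption: the project lives at AutoGPTQ/AutoGPTQ on GitHub.
pip install git+https://github.com/AutoGPTQ/AutoGPTQ.git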