Meituan-AutoML / MobileVLM

Strong and Open Vision Language Assistant for Mobile Devices

Is there a Chinese README? The model fails to run on two RTX A4000 GPUs and throws an error

life2048 opened this issue · comments

I am running the VLM inference code from the README, and it errors out at the end of the run. Do any parameters need to be adjusted?
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument weight in method wrapper_CUDA__cudnn_convolution)

Please make sure your torch build is the CUDA version. If it still does not work, you can add device_map='cuda' to the loader call, like this -> tokenizer, model, image_processor, context_len = load_pretrained_model(args.model_path, args.load_8bit, args.load_4bit, device_map='cuda')
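For context, this error typically happens when a device_map of 'auto' shards the model weights across both GPUs while the input tensors sit on only one of them. A minimal sketch of two possible workarounds (the load_pretrained_model call is commented out because it requires the MobileVLM checkpoint and mirrors the call from this issue; both options are assumptions, not a verified fix):

```python
import os

# Workaround 1 (assumed): hide the second GPU before torch initializes,
# forcing every tensor and all weights onto a single visible device (cuda:0).
# This must run before `import torch` in your inference script.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Workaround 2 (sketch of the suggestion above): pass device_map='cuda'
# so the loader places all weights on one device instead of sharding
# them across cuda:0 and cuda:1.
# tokenizer, model, image_processor, context_len = load_pretrained_model(
#     args.model_path, args.load_8bit, args.load_4bit, device_map='cuda')

print(os.environ["CUDA_VISIBLE_DEVICES"])
```

With either option, the model and the image tensors end up on the same device, which is what the RuntimeError is complaining about.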

Hi, we are closing this issue due to inactivity. We hope your question has been resolved. If you have any further concerns, please feel free to re-open it or open a new issue. Thanks!