THUDM / CodeGeeX2

CodeGeeX2: A More Powerful Multilingual Code Generation Model

Home Page: https://codegeex.cn

Mac M1 error: AssertionError: Torch not compiled with CUDA enabled

zhou20120904 opened this issue

I am using chatglm-cpp to improve performance. Running ./demo/fastapicpu.py on an Apple M1 Mac fails with the following error:
```
/opt/homebrew/lib/python3.10/site-packages/transformers/utils/generic.py:311: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
Failed to load cpm_kernels:No module named 'cpm_kernels.kernels'; 'cpm_kernels' is not a package
Traceback (most recent call last):
  File "/Users/xiaochenyue/Desktop/zjy/CodeGeeX2/./demo/fastapicpu.py", line 199, in <module>
    model = device()
  File "/Users/xiaochenyue/Desktop/zjy/CodeGeeX2/./demo/fastapicpu.py", line 135, in device
    model = chatglm_cpp.Pipeline(args.model_path, dtype=dtype)
  File "/opt/homebrew/lib/python3.10/site-packages/chatglm_cpp/__init__.py", line 42, in __init__
    convert(f, model_path, dtype=dtype)
  File "/opt/homebrew/lib/python3.10/site-packages/chatglm_cpp/convert.py", line 479, in convert
    model = auto_model_class.from_pretrained(model_name_or_path, trust_remote_code=True, low_cpu_mem_usage=True)
  File "/opt/homebrew/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 558, in from_pretrained
    return model_class.from_pretrained(
  File "/opt/homebrew/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2954, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/Users/xiaochenyue/.cache/huggingface/modules/transformers_modules/codegeex2-6b-int4/modeling_chatglm.py", line 861, in __init__
    self.quantize(self.config.quantization_bit, empty_init=True)
  File "/Users/xiaochenyue/.cache/huggingface/modules/transformers_modules/codegeex2-6b-int4/modeling_chatglm.py", line 1193, in quantize
    self.transformer.encoder = quantize(self.transformer.encoder, bits, empty_init=empty_init, device=device,
  File "/Users/xiaochenyue/.cache/huggingface/modules/transformers_modules/codegeex2-6b-int4/quantization.py", line 157, in quantize
    weight=layer.self_attention.query_key_value.weight.to(torch.cuda.current_device()),
  File "/opt/homebrew/lib/python3.10/site-packages/torch/cuda/__init__.py", line 787, in current_device
    _lazy_init()
  File "/opt/homebrew/lib/python3.10/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
```
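
The traceback points at the root cause: the codegeex2-6b-int4 checkpoint's own modeling code (modeling_chatglm.py / quantization.py) moves weights with `torch.cuda.current_device()` inside `quantize()`, and the macOS/Apple Silicon wheels of PyTorch ship without CUDA, so the assertion fires before chatglm-cpp ever gets to convert the model. Below is a minimal sketch of a possible workaround, assuming the full-precision THUDM/codegeex2-6b checkpoint is acceptable and that chatglm-cpp resolves it the same way it resolved the local int4 path above: let chatglm-cpp do the int4 quantization itself on the CPU.

```python
import chatglm_cpp

# Hedged sketch, not an official fix: convert the full-precision checkpoint
# instead of codegeex2-6b-int4. chatglm-cpp quantizes to int4 ("q4_0") on the
# CPU during conversion, so the CUDA-only quantize() path inside the int4
# checkpoint's modeling code is never executed.
pipeline = chatglm_cpp.Pipeline("THUDM/codegeex2-6b", dtype="q4_0")

print(pipeline.generate("# language: Python\n# write a bubble sort function\n"))
```

If chatglm-cpp is not a hard requirement, the full-precision checkpoint can also be loaded directly with transformers on the M1, either on the CPU (`model.float()`) or on the `mps` device; only the pre-quantized int4 checkpoint hard-codes CUDA.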