InternLM / xtuner

An efficient, flexible and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)

Home Page: https://xtuner.readthedocs.io/zh-cn/latest/

convert pth_to_hf raises NotImplementedError: Cannot copy out of meta tensor

mikewin opened this issue

Environment: Windows 11, single RTX 4070 (12 GB)
Following the tutorial at Tutorial/xtuner/README.md,
section 2.3.6 "Convert the obtained PTH model to a HuggingFace model".
When running this step, the following error occurs:

(xtuner) λ xtuner convert pth_to_hf ./internlm_chat_7b_qlora_oasst1_e3_copy.py ./work_dirs/internlm_chat_7b_qlora_oasst1_e3_copy/iter_6501.pth ./hf
quantization_config convert to <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>
`low_cpu_mem_usage` was None, now set to True since model is quantized.
Loading checkpoint shards: 100%|█████████████████████████████████████████| 8/8 [01:18<00:00,  9.85s/it]
05/18 20:42:28 - mmengine - WARNING - Due to the implementation of the PyTorch version of flash attention, even when the `output_attentions` flag is set to True, it is not possible to return the `attn_weights`.
05/18 20:42:28 - mmengine - INFO - dispatch internlm attn forward
... (the line above repeats 32 times, once per transformer layer)
05/18 20:42:28 - mmengine - INFO - replace internlm rope
... (the line above repeats 32 times, once per transformer layer)
You are using an old version of the checkpointing format that is deprecated (We will also silently ignore `gradient_checkpointing_kwargs` in case you passed it).Please update to the new format on your modeling file. To use the new format, you need to completely remove the definition of the method `_set_gradient_checkpointing` in your model.
You are using an old version of the checkpointing format that is deprecated (We will also silently ignore `gradient_checkpointing_kwargs` in case you passed it).Please update to the new format on your modeling file. To use the new format, you need to completely remove the definition of the method `_set_gradient_checkpointing` in your model.
Traceback (most recent call last):
  File "e:\work\llmch\xtuner\xtuner\tools\model_converters\pth_to_hf.py", line 158, in <module>
    main()
  File "e:\work\llmch\xtuner\xtuner\tools\model_converters\pth_to_hf.py", line 78, in main
    model = BUILDER.build(cfg.model)
  File "e:\tools\miniconda3\envs\xtuner\lib\site-packages\mmengine\registry\registry.py", line 570, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "e:\tools\miniconda3\envs\xtuner\lib\site-packages\mmengine\registry\build_functions.py", line 121, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "e:\work\llmch\xtuner\xtuner\model\sft.py", line 114, in __init__
    self._prepare_for_lora(peft_model, use_activation_checkpointing)
  File "e:\work\llmch\xtuner\xtuner\model\sft.py", line 143, in _prepare_for_lora
    self.llm = get_peft_model(self.llm, self.lora)
  File "e:\tools\miniconda3\envs\xtuner\lib\site-packages\peft\mapping.py", line 149, in get_peft_model
    return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](model, peft_config, adapter_name=adapter_name)
  File "e:\tools\miniconda3\envs\xtuner\lib\site-packages\peft\peft_model.py", line 1395, in __init__
    super().__init__(model, peft_config, adapter_name)
  File "e:\tools\miniconda3\envs\xtuner\lib\site-packages\peft\peft_model.py", line 138, in __init__
    self.base_model = cls(model, {adapter_name: peft_config}, adapter_name)
  File "e:\tools\miniconda3\envs\xtuner\lib\site-packages\peft\tuners\lora\model.py", line 139, in __init__
    super().__init__(model, config, adapter_name)
  File "e:\tools\miniconda3\envs\xtuner\lib\site-packages\peft\tuners\tuners_utils.py", line 166, in __init__
    self.inject_adapter(self.model, adapter_name)
  File "e:\tools\miniconda3\envs\xtuner\lib\site-packages\peft\tuners\tuners_utils.py", line 372, in inject_adapter
    self._create_and_replace(peft_config, adapter_name, target, target_name, parent, current_key=key)
  File "e:\tools\miniconda3\envs\xtuner\lib\site-packages\peft\tuners\lora\model.py", line 223, in _create_and_replace
    new_module = self._create_new_module(lora_config, adapter_name, target, **kwargs)
  File "e:\tools\miniconda3\envs\xtuner\lib\site-packages\peft\tuners\lora\model.py", line 314, in _create_new_module
    new_module = dispatcher(target, adapter_name, lora_config=lora_config, **kwargs)
  File "e:\tools\miniconda3\envs\xtuner\lib\site-packages\peft\tuners\lora\bnb.py", line 506, in dispatch_bnb_4bit
    new_module = Linear4bit(target, adapter_name, **fourbit_kwargs)
  File "e:\tools\miniconda3\envs\xtuner\lib\site-packages\peft\tuners\lora\bnb.py", line 293, in __init__
    self.update_layer(
  File "e:\tools\miniconda3\envs\xtuner\lib\site-packages\peft\tuners\lora\layer.py", line 131, in update_layer
    self.to(weight.device)
  File "e:\tools\miniconda3\envs\xtuner\lib\site-packages\torch\nn\modules\module.py", line 1173, in to
    return self._apply(convert)
  File "e:\tools\miniconda3\envs\xtuner\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  File "e:\tools\miniconda3\envs\xtuner\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  File "e:\tools\miniconda3\envs\xtuner\lib\site-packages\torch\nn\modules\module.py", line 804, in _apply
    param_applied = fn(param)
  File "e:\tools\miniconda3\envs\xtuner\lib\site-packages\torch\nn\modules\module.py", line 1166, in convert
    raise NotImplementedError(
NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
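
For reference, a minimal standalone sketch (not xtuner code) of what this PyTorch error means: parameters created on the meta device carry only shape and dtype, no storage, so `Module.to()` has nothing to copy, whereas `Module.to_empty()` allocates fresh, uninitialized storage on the target device:

```python
# Illustrative only; requires PyTorch 2.x for the device context manager.
import torch
import torch.nn as nn

with torch.device("meta"):
    layer = nn.Linear(4, 4)          # parameters are meta tensors: shape/dtype only, no data

try:
    layer.to("cpu")                  # .to() tries to copy data that does not exist
except NotImplementedError as err:
    print(err)                       # "Cannot copy out of meta tensor; no data! ..."

layer = layer.to_empty(device="cpu") # allocates real (uninitialized) storage on the CPU
print(layer.weight.device)           # cpu
```

In the traceback above, the failure is triggered when PEFT's `update_layer` calls `self.to(weight.device)` while the base layer's weight is still a meta tensor.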

Are you using xtuner's main branch? This issue was already fixed in #697 @mikewin
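
If it helps, a quick (hypothetical) check to confirm which xtuner copy the environment actually imports, e.g. when a pip-installed release and a source checkout of main coexist:

```python
# Hypothetical check: verify which xtuner installation Python picks up.
import importlib.metadata
import xtuner

print(xtuner.__file__)                       # filesystem path the package is imported from
print(importlib.metadata.version("xtuner"))  # version of the installed distribution
```

If the printed path points at an older site-packages copy rather than the patched source checkout, the fix referenced above would not actually be in effect.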

I'm using the code with that fix, and the problem still occurs.