Why is the model dtype set back to fp32 after the quantization config is already in place?
gongye19 opened this issue · comments
gongye19 commented
```python
model = AutoModelForCausalLM.from_pretrained(
    args.model_name_or_path,
    device_map=device_map,
    load_in_4bit=True,
    torch_dtype=torch.float16,
    trust_remote_code=True,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type="nf4",
        llm_int8_threshold=6.0,
        llm_int8_has_fp16_weight=False,
    ),
)
......
model = get_peft_model(model, config)
model.print_trainable_parameters()
model.config.torch_dtype = torch.float32
```
Yang JianXin commented
You can ignore that line; it has no substantive effect on training.
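To see why the line is harmless: `model.config.torch_dtype` is just a metadata attribute on the config object, and assigning to it does not cast the weights that are already loaded. Below is a minimal sketch of that general behavior; `TinyConfig` and `TinyModel` are hypothetical stand-ins (not transformers classes) used only to mirror the pattern, and PyTorch is assumed to be available.

```python
import torch
import torch.nn as nn

class TinyConfig:
    """Stand-in for a model config object that records a dtype as metadata."""
    def __init__(self):
        self.torch_dtype = torch.float16

class TinyModel(nn.Module):
    """Stand-in for a loaded model: config metadata vs. actual weight dtype."""
    def __init__(self):
        super().__init__()
        self.config = TinyConfig()
        # The parameters genuinely live in fp16, like weights loaded
        # with torch_dtype=torch.float16.
        self.linear = nn.Linear(4, 4).half()

model = TinyModel()
# Mirrors the line in question: only the metadata field is reassigned.
model.config.torch_dtype = torch.float32

print(model.config.torch_dtype)   # metadata now says float32 ...
print(model.linear.weight.dtype)  # ... but the weights are still float16
```

In other words, the assignment only changes what gets recorded (e.g. when the config is saved); the quantized/fp16 weights and the compute dtype chosen in `BitsAndBytesConfig` are untouched, which is why the maintainer says the line can be ignored.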