Does codegen-16B-mono support "load_in_8bit" in huggingface?
Leolty opened this issue
As titled. I can't seem to load it in int8 on a single GPU (24 GB).
nvm, I have solved this. It works perfectly fine now. Thanks!
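For anyone else hitting this: the thread doesn't say exactly what fixed it, but a minimal sketch of 8-bit loading with `transformers` + `bitsandbytes` (both assumed installed, along with `accelerate`) looks like this. The function name and defaults here are illustrative, not from the repo:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer


def load_codegen_8bit(model_name="Salesforce/codegen-16B-mono"):
    """Load CodeGen in int8 so the 16B weights fit on a single 24 GB GPU."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        device_map="auto",   # let accelerate place layers on available devices
        load_in_8bit=True,   # requires the bitsandbytes package
    )
    return tokenizer, model
```

Note that `load_in_8bit=True` only works together with `device_map="auto"` (or a similar device map); passing it alone raises an error in recent `transformers` versions.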
Do you know how to fine-tune the codegen model with your own code dataset?
Hi @peppa-nwpu, maybe you can refer to this finetune.py; this is what I did to fine-tune. You can customize the CodeGenDataset for your own code data.
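To give an idea of what customizing such a dataset might look like: below is a hypothetical sketch of a minimal PyTorch `Dataset` for causal-LM fine-tuning on your own code snippets. The class and argument names are illustrative and not taken from the linked finetune.py:

```python
import torch
from torch.utils.data import Dataset


class CodeDataset(Dataset):
    """Tokenize raw code strings for causal-LM fine-tuning."""

    def __init__(self, texts, tokenizer, max_length=512):
        # Pre-tokenize everything up front; for large corpora you would
        # tokenize lazily in __getitem__ instead.
        self.examples = [
            tokenizer(
                text,
                truncation=True,
                max_length=max_length,
                padding="max_length",
                return_tensors="pt",
            )
            for text in texts
        ]

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        enc = self.examples[idx]
        input_ids = enc["input_ids"].squeeze(0)
        return {
            "input_ids": input_ids,
            "attention_mask": enc["attention_mask"].squeeze(0),
            # For causal LM training, labels are the input ids themselves;
            # the model shifts them internally when computing the loss.
            "labels": input_ids.clone(),
        }
```

An instance of this can be passed straight to a `Trainer` or wrapped in a `DataLoader`; the key point is that each item returns `input_ids`, `attention_mask`, and `labels` tensors of the same length.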