Lightning-AI / lit-llama

Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed.


This codebase has so many errors it is completely useless and unusable

Abecid opened this issue and commented:

precision = "bf16-true"

is unsupported but in the code for lora-finetuning

fabric.init_module

causes a does not exist error.

with fabric.device

results in an error in full fine-tuning script
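For context, the three calls look roughly like this in a Lightning Fabric setup (a minimal sketch, not the exact lit-llama code; it assumes a recent Lightning 2.x release and PyTorch 2.x, since torch.device only became usable as a context manager in PyTorch 2.0):

```python
import lightning as L
import torch

# LoRA script: "bf16-true" is only recognized by newer Lightning releases.
fabric = L.Fabric(devices=1, precision="bf16-true")
fabric.launch()

# Fabric.init_module() likewise only exists in newer Lightning releases;
# on older installs it raises an AttributeError ("does not exist").
with fabric.init_module():
    model = torch.nn.Linear(8, 8)

# Full fine-tuning script: fabric.device is a torch.device, and using it
# as a context manager requires PyTorch 2.0 or newer.
with fabric.device:
    other_model = torch.nn.Linear(8, 8)
```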

Overall very poor experience and poor documentation. Garbage

I don't know about the with fabric.device issue, but let me address the other two:

  1. precision = "bf16-true"

  2. fabric.init_module

with more explicit warnings and suggestions via a PR shortly.
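Until that PR lands, here is a quick way to check whether the installed Lightning release exposes these APIs at all (a minimal sketch; the exact versions in which each API appeared are not confirmed here):

```python
import lightning as L

print("lightning version:", L.__version__)

# Fabric.init_module() only exists in newer Lightning releases.
print("Fabric.init_module available:", hasattr(L.Fabric, "init_module"))

try:
    # "bf16-true" was added to Fabric's precision options in Lightning 2.x;
    # older releases reject the string with a validation error.
    L.Fabric(accelerator="cpu", devices=1, precision="bf16-true")
    print('precision="bf16-true" accepted')
except Exception as err:
    print('precision="bf16-true" rejected:', err)
```

If either check fails, upgrading lightning (pip install -U lightning) is most likely the fix the scripts assume.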

@Abecid What error are you getting with bfloat16? I think it's only supported on Ampere and newer GPUs, but it appears that it now also works on older T4s and on CPU. I just tested it. Maybe it's a PyTorch version thing.

If you have time and don't mind spending a few more minutes, could you let me know the exact error you are getting and your PyTorch version so I can look into it further? I could then add a more explicit warning to save future users the hassle.
(Screenshot attached: 2023-08-08, 1:15 PM.)
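For reference, a short snippet for gathering those details (a minimal sketch using only standard PyTorch calls):

```python
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    # Reports whether the current device/toolkit combination supports bfloat16.
    print("bf16 supported:", torch.cuda.is_bf16_supported())
```

Pasting that output into the issue would make it much easier to tell whether this is a hardware limitation or a PyTorch/Lightning version mismatch.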