Where is tokenizer.model? (tokenizer path)
andreamigliorati opened this issue · comments
andreamigliorati commented
daxian-lh commented
Did you solve the problem?
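For anyone landing here: `tokenizer.model` is not generated by this repository; it is the SentencePiece model distributed alongside the original LLaMA weights, and the setup instructions expect it next to the converted checkpoint files (the exact location, e.g. a `checkpoints/` subdirectory, depends on which README steps you followed, so treat any specific path as an assumption). A minimal, hypothetical helper to locate the file on disk:

```python
from pathlib import Path

def find_tokenizer(root: str) -> list[Path]:
    """Return every tokenizer.model found under `root`, sorted by path."""
    return sorted(Path(root).rglob("tokenizer.model"))

# Example (path is an assumption, adjust to your setup):
# matches = find_tokenizer("checkpoints")
# print(matches)
```

If the list is empty, the file most likely needs to be copied in from the original weights download rather than produced by any script in this repo.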