Lightning-AI / litgpt

Pretrain, finetune, deploy 20+ LLMs on your own data. Uses state-of-the-art techniques: flash attention, FSDP, 4-bit, LoRA, and more.

Home Page: https://lightning.ai

Resolve garbled output characters

fireyanci opened this issue · comments

Hello. I want my model to handle Chinese, but the data and compute required for full-parameter training in Chinese are huge, so I used a Chinese Llama model trained by another open-source project. Since I find the litgpt project very convenient, I converted that model into a lit model. The converted model can produce Chinese, but some characters in the output text are garbled. How can I fix this garbled output? I look forward to your reply. Thank you.
(Appendix: Chinese open-source model, GitHub address: https://github.com/LlamaFamily/Llama-Chinese?tab=readme-ov-file
Chinese open-source model files on Hugging Face: https://huggingface.co/FlagAlpha/Llama3-Chinese-8B-Instruct/tree/main)

It can understand my question and produce a corresponding answer, but some of the output characters are garbled.

I wonder if it is related to the tokenizer? It could also be a limitation of the terminal when rendering certain characters. Unfortunately, I am not very familiar with working with those characters. One thing you could try is adding a print(<expected special characters>) to the script to see whether this is an issue with the terminal output.
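As a starting point, here is a minimal sketch of that terminal check. The sample sentence is just an arbitrary example, not taken from the model output:

```python
# Minimal check that stdout/the terminal can render Chinese characters.
# If this prints cleanly but the model output is still garbled, the
# problem is more likely in the tokenizer than in the terminal.
import locale
import sys

print("stdout encoding:", sys.stdout.encoding)              # should be UTF-8 for CJK text
print("preferred locale encoding:", locale.getpreferredencoding())

# Arbitrary sample Chinese text for the test.
sample = "你好，世界！这是一个编码测试。"
print(sample)

# Also print the raw UTF-8 bytes, to see whether garbling happens at the
# byte level (encoding) or only at the rendering level (font/terminal).
print(sample.encode("utf-8"))
```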

Thank you very much for your reply

I've been too busy lately, so I have only just started trying this. I can print those garbled characters from the terminal, so I'm not sure whether this is related to the tokenizer. To give the model Chinese language ability, the developers of the Chinese Llama repository expanded the tokenizer.
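Since the tokenizer was expanded, one thing worth checking is whether the converted litgpt checkpoint still round-trips Chinese text the same way the original Hugging Face checkpoint does. A rough sketch follows; the checkpoint directory is a placeholder for wherever the converted model was saved, and it assumes litgpt's Tokenizer class (as used in its generation scripts) plus the FlagAlpha checkpoint mentioned in this thread:

```python
# Rough sketch: compare the original Hugging Face tokenizer with the
# tokenizer in the converted litgpt checkpoint on the same Chinese text.
from pathlib import Path

from transformers import AutoTokenizer
from litgpt.tokenizer import Tokenizer  # assuming litgpt exposes this class

text = "你好，请用中文回答这个问题。"  # arbitrary test sentence

# Original tokenizer from the Chinese Llama model linked in this issue.
hf_tok = AutoTokenizer.from_pretrained("FlagAlpha/Llama3-Chinese-8B-Instruct")
hf_ids = hf_tok.encode(text)
print("HF round-trip:    ", hf_tok.decode(hf_ids))

# Tokenizer loaded from the converted litgpt checkpoint directory
# (placeholder path; adjust to your setup).
lit_tok = Tokenizer(Path("checkpoints/FlagAlpha/Llama3-Chinese-8B-Instruct"))
lit_ids = lit_tok.encode(text)
print("litgpt round-trip:", lit_tok.decode(lit_ids))

# If the HF round-trip is clean but the litgpt one is garbled, the
# conversion likely dropped or mismatched the expanded vocabulary.
```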