OpenGVLab / LLaMA-Adapter

[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters

Does the LLaMA pre-trained model you used (at the link you provided below) support Chinese?

jzssz opened this issue · comments

When you mentioned "download the LLaMA-7B from Hugging Face https://huggingface.co/nyanko7/LLaMA-7B/tree/main (unofficial)" in https://github.com/OpenGVLab/LLaMA-Adapter#inference.
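
For reference, one way to fetch those weights programmatically is the `huggingface_hub` client. This is only a sketch, not part of the repo's instructions; the `local_dir` path is an arbitrary choice for the example:

```python
# Minimal sketch: download the unofficial nyanko7/LLaMA-7B weights
# mentioned above using huggingface_hub's snapshot_download.
from huggingface_hub import snapshot_download

# Fetches every file in the repo into ./LLaMA-7B (path chosen for this example).
snapshot_download(repo_id="nyanko7/LLaMA-7B", local_dir="./LLaMA-7B")
```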

Our LLaMA-Adapter (V1) does not support Chinese, but you can try ImageBind-LLM, which supports both English and Chinese.
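
For anyone landing here, a rough sketch of what trying ImageBind-LLM looks like, based on the example style in the repo's imagebind_LLM directory. The function names (`llama.load`, `llama.format_prompt`, `model.generate`) are assumptions drawn from that README and may have changed, so check the repo for the current API:

```python
# Hedged sketch of text-only ImageBind-LLM inference; the llama.load /
# format_prompt / generate calls are assumed from the imagebind_LLM README.
import llama

llama_dir = "/path/to/LLaMA/"  # directory holding the original LLaMA weights

# The ImageBind-LLM adapter checkpoint is fetched automatically on first load.
model = llama.load("7B", llama_dir, knn=True)
model.eval()

# Chinese prompts are supported because the model was tuned on bilingual data.
prompt = llama.format_prompt("请用中文介绍一下这个项目。")  # "Introduce this project in Chinese."
results = model.generate({}, [prompt], max_gen_len=256)  # empty dict: no image/audio inputs
print(results[0].strip())
```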

Hello, the get_chinese_llama.py file is missing from https://github.com/OpenGVLab/LLaMA-Adapter/tree/main/imagebind_LLM. Could you add it back?