Mihaiii / RoAlpaca

Finetuning InstructLLaMA with Romanian data

RoAlpaca: A Romanian-finetuned instruction LLaMA

This repository is intended to share all the steps and resources that we used to finetune our version of LLaMA.

This model is intended for research use only; it cannot be used for commercial purposes or entertainment.

References

If I have seen further it is by standing on the sholders [sic] of Giants. -- Isaac Newton

We open this section with this quote because everything we did was only possible thanks to the strong community and the work done by other people and groups. Our work relies mainly on LLaMA, Stanford Alpaca, Alpaca Lora, ChatGPT, and Hugging Face. Thank you all for the great work and for opening it to the world!

Data

We translated alpaca_data.json to Romanian using ChatGPT. Even though the translation was not perfect, the tradeoff between cost and quality was worth it. We paid around US$ ??? TBD to translate the full dataset to Romanian. If you want to know more about how the original dataset was built, see Stanford Alpaca.
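
For illustration, here is a minimal sketch of how such a translation pass could look with the OpenAI API; the model name, prompt wording, and file paths below are assumptions, not the exact script behind our dataset.

```python
# Hypothetical sketch: translating alpaca_data.json to Romanian via the OpenAI chat API.
# Model name, prompt wording, and file paths are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def translate(text: str) -> str:
    """Translate one field to Romanian; empty fields are returned unchanged."""
    if not text.strip():
        return text
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Translate the user text to Romanian. Keep code and proper names unchanged."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

with open("alpaca_data.json", encoding="utf-8") as f:
    records = json.load(f)

# Translate every field (instruction, input, output) of every record.
translated = [{key: translate(value) for key, value in record.items()} for record in records]

with open("alpaca_data_ro.json", "w", encoding="utf-8") as f:
    json.dump(translated, f, ensure_ascii=False, indent=2)
```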

Finetuning

To finetune the LLaMA model we used the code available in Alpaca Lora, which finetunes LLaMA using PEFT from Hugging Face. With this, we could run our finetuning step on a single A100 on Colab on top of LLaMA-7B. We trained for TBD hours and found the results quite impressive for that little training time. The notebook we used is available here.
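
For reference, below is a condensed sketch of a LoRA finetuning setup in the spirit of Alpaca Lora with Hugging Face PEFT; the base checkpoint, hyperparameters, and prompt template are illustrative assumptions, not the exact configuration from our notebook.

```python
# Condensed LoRA finetuning sketch in the spirit of Alpaca Lora; the checkpoint,
# hyperparameters, and prompt template are illustrative assumptions.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (DataCollatorForLanguageModeling, LlamaForCausalLM,
                          LlamaTokenizer, Trainer, TrainingArguments)

base_model = "decapoda-research/llama-7b-hf"  # assumption: any LLaMA-7B checkpoint

tokenizer = LlamaTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# Load the base model in 8-bit and attach small trainable LoRA adapters (PEFT).
model = LlamaForCausalLM.from_pretrained(base_model, load_in_8bit=True, device_map="auto")
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
))

def to_features(example):
    # Assumed Alpaca-style prompt template; the real notebook may format differently.
    prompt = (f"### Instruction:\n{example['instruction']}\n\n"
              f"### Input:\n{example['input']}\n\n"
              f"### Response:\n{example['output']}")
    return tokenizer(prompt, truncation=True, max_length=512)

raw = load_dataset("json", data_files="alpaca_data_ro.json")["train"]
train_dataset = raw.map(to_features, remove_columns=raw.column_names)

Trainer(
    model=model,
    train_dataset=train_dataset,
    args=TrainingArguments(
        output_dir="roalpaca-lora", per_device_train_batch_size=4,
        gradient_accumulation_steps=8, num_train_epochs=3,
        learning_rate=3e-4, fp16=True, logging_steps=10,
    ),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()

model.save_pretrained("roalpaca-lora")  # saves only the LoRA adapter weights
```

Because only the LoRA adapter weights are trained, the saved artifact stays small and can be loaded on top of the original LLaMA-7B checkpoint for inference.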

Example outputs

TODO

You can test it using the eval notebook (TBD).
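
Until the eval notebook is published, here is a hedged sketch of how inference with the resulting LoRA adapter could look; the adapter path, prompt, and generation settings are assumptions.

```python
# Hypothetical inference sketch: load the base LLaMA plus the LoRA adapter and
# generate a completion. Adapter path, prompt, and generation settings are assumptions.
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base_model = "decapoda-research/llama-7b-hf"  # assumption
adapter_path = "roalpaca-lora"                # assumption: local adapter directory

tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_path)
model.eval()

# Romanian prompt: "What is a language model?"
prompt = "### Instruction:\nCe este un model de limbaj?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```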

Next steps

  • Create a better Romanian dataset
  • Evaluate toxicity
  • Finetune larger models

Authors

About

Finetuning InstructLLaMA with Romanian data

License: Apache License 2.0


Languages

  • Jupyter Notebook: 99.0%
  • Python: 1.0%
  • JavaScript: 0.0%
  • Dockerfile: 0.0%