An example of fine-tuning a Mistral 7B model using quantization, PEFT, and LoRA
The training data is both grossly insufficient and a poor choice, since the model already performs well on that subject.
It is just an example to demonstrate the process.
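Since the point of this repo is the process rather than the result, the core idea behind LoRA can be illustrated on its own. The sketch below is a toy NumPy illustration of the low-rank update (it does not use the PEFT library or Mistral's real dimensions; all sizes and the scaling convention `alpha / r` are assumptions taken from the standard LoRA formulation):

```python
import numpy as np

# Toy sketch of the LoRA idea: instead of updating the full weight matrix
# W (d_out x d_in), train two small matrices A (r x d_in) and B (d_out x r)
# with rank r << min(d_out, d_in). The effective weight used in the forward
# pass is W + (alpha / r) * B @ A. All dimensions here are toy values.

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 16, 8, 2, 4

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init: no change at start

x = rng.normal(size=(d_in,))

def lora_forward(x):
    # Base (frozen) path plus the low-rank update, scaled by alpha / r.
    return W @ x + (alpha / r) * (B @ (A @ x))

# Because B starts at zero, the adapted model initially matches the base model.
assert np.allclose(lora_forward(x), W @ x)

# The adapter trains only r * (d_in + d_out) parameters instead of
# d_in * d_out for full fine-tuning.
full_params = d_in * d_out
lora_params = r * (d_in + d_out)
print(full_params, lora_params)
```

In the actual fine-tuning script, the PEFT library applies this same trick to selected weight matrices inside the quantized 7B model, which is why only a small fraction of parameters needs to be trained and stored.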