Memory-efficient LLM fine-tuning without NVIDIA CUDA, Intel x86-exclusive tooling, AMD ROCm, Unsloth, or BitsAndBytes, with conversion back to GGUF using PyTorch
Repository on GitHub: https://github.com/albertstarfield/LLMFineTuningQuantizedUniversal
GNU General Public License v2.0
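The project's stated goal is memory-saving fine-tuning using only portable PyTorch, without vendor-specific libraries. The repository's actual training code is not shown here; the following is a minimal, hypothetical sketch of one common memory-saving technique that works on any PyTorch backend (CPU, MPS, CUDA): gradient checkpointing, which recomputes activations during the backward pass instead of storing them. The `TinyModel` toy architecture and all hyperparameters below are illustrative assumptions, not the repository's implementation.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class TinyBlock(nn.Module):
    """Toy residual feed-forward block standing in for a transformer layer."""
    def __init__(self, dim=64):
        super().__init__()
        self.ff = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x):
        return x + self.ff(x)

class TinyModel(nn.Module):
    """Hypothetical stand-in model; real LLM fine-tuning would load
    pretrained weights instead."""
    def __init__(self, dim=64, depth=4):
        super().__init__()
        self.blocks = nn.ModuleList(TinyBlock(dim) for _ in range(depth))
        self.head = nn.Linear(dim, dim)

    def forward(self, x):
        for blk in self.blocks:
            # Gradient checkpointing: activations inside `blk` are not kept
            # for backward; they are recomputed, trading compute for memory.
            x = checkpoint(blk, x, use_reentrant=False)
        return self.head(x)

device = "cpu"  # any torch backend works: "cpu", "mps", "cuda", ...
model = TinyModel().to(device)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Dummy batch and target; a real run would use tokenized text data.
x = torch.randn(8, 16, 64, device=device)
target = torch.randn(8, 16, 64, device=device)

loss = nn.functional.mse_loss(model(x), target)
loss.backward()
opt.step()
print(float(loss.detach()))
```

This runs entirely through stock PyTorch ops, so it needs none of the vendor-specific libraries the title rules out; converting a fine-tuned model back to GGUF would be a separate export step not covered by this sketch.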