albertstarfield / LLMFineTuningQuantizedUniversal

Memory-efficient LLM fine-tuning that does not depend on Nvidia GPUs, Intel x86-exclusive tooling, AMD ROCm, Unsloth, or BitsandBytes, with conversion back to GGUF using PyTorch.

Repository from GitHub: https://github.com/albertstarfield/LLMFineTuningQuantizedUniversal

This repository is not active.

About

Memory-efficient LLM fine-tuning that does not depend on Nvidia GPUs, Intel x86-exclusive tooling, AMD ROCm, Unsloth, or BitsandBytes, with conversion back to GGUF using PyTorch.
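The repository does not document its method here, but the description implies fine-tuning with plain PyTorch so that no vendor-specific quantization library (BitsandBytes, Unsloth, ROCm kernels) is required. One common way to achieve that is to freeze the base weights and train only a small low-rank (LoRA-style) adapter, which works on any PyTorch backend (CUDA, Apple MPS, or CPU). The sketch below is a hypothetical illustration of that idea, not code from this repository; the class and parameter names (`LoRALinear`, `lora_a`, `lora_b`, `rank`, `alpha`) are our own.

```python
# Hypothetical sketch: memory-frugal fine-tuning in plain PyTorch, without
# BitsandBytes or Unsloth. The base layer is frozen; only a small low-rank
# adapter (A @ B) is trained, so optimizer state stays tiny.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update x @ A @ B."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen
        # A is small random init, B starts at zero so the adapter is a no-op
        # at the beginning of training.
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scale


# Device-agnostic: picks CUDA, Apple MPS, or plain CPU, whichever exists.
device = "cuda" if torch.cuda.is_available() else (
    "mps" if torch.backends.mps.is_available() else "cpu")

layer = LoRALinear(nn.Linear(64, 64)).to(device)
x = torch.randn(4, 64, device=device)
out = layer(x)

# Only the adapter parameters are trainable.
trainable = [n for n, p in layer.named_parameters() if p.requires_grad]
```

After training, the adapter can be merged into the base weights and the resulting model exported to GGUF, for example with a converter script from the llama.cpp project; the exact conversion path used by this repository is not stated here.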

License: GNU General Public License v2.0


Languages

Language: Jupyter Notebook 100.0%