sujitvasanth / GPTQ-finetune

train a GPTQ-quantised LLM with a local or hosted custom dataset using PEFT (parameter-efficient fine-tuning) and/or run inference with and without the model adapter


GPTQ-finetune

Train a GPTQ-quantised LLM with a local or hosted custom dataset using PEFT (parameter-efficient fine-tuning), and/or run inference with or without the model adapter. Because adapters are small and swappable on top of one shared base model, this is useful for serving multiple fine-tuned agents. The generated adapter is at sujitvasanth/TheBloke-openchat-3.5-0106-GPTQ-PEFTadapterJsonSear. The custom training dataset is https://huggingface.co/datasets/sujitvasanth/jsonsearch2; a .csv version is included in this repository.
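A minimal sketch of the with/without-adapter inference pattern, assuming the transformers and peft libraries and a single CUDA device. The adapter directory, prompt, and generation settings below are illustrative placeholders, not the repository's exact code:

```python
# Sketch: run the same GPTQ base model with and without a PEFT adapter.
# Assumes transformers, peft, and the GPTQ backend (optimum + auto-gptq)
# are installed; paths and the prompt are placeholders.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_path = "TheBloke/openchat-3.5-0106-GPTQ"  # or a local GPTQ folder
adapter_path = "./peft-adapter"                # or the hub adapter repo named above

tokenizer = AutoTokenizer.from_pretrained(base_path)
model = AutoModelForCausalLM.from_pretrained(base_path, device_map="auto")

def generate(m, prompt):
    # Assumes a single CUDA device so all inputs go to cuda:0.
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    out = m.generate(**inputs, max_new_tokens=100)
    return tokenizer.decode(out[0], skip_special_tokens=True)

prompt = "Find the entry for Bob in this JSON: ..."  # placeholder prompt
print(generate(model, prompt))  # base model, no adapter

model = PeftModel.from_pretrained(model, adapter_path)  # attach the adapter
print(generate(model, prompt))  # same base weights, fine-tuned behaviour
```

Because the base weights are untouched, several agents can share one GPTQ model on disk and in memory, each loading only its own small adapter.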

Run GPTQ-finetune.py, adjusting the locations of your GPTQ model and CSV file and the path where the generated model adapter should be saved. The example dataset improves the JSON-search capabilities of TheBloke's openchat GPTQ model; a sketch of the training flow is shown below.
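The flow is roughly: load the quantised base model, attach trainable LoRA adapters via PEFT, train on the tokenised CSV, and save only the adapter. A minimal sketch, assuming transformers, datasets, peft, and the GPTQ backend are installed; the paths, the "text" column name, and the LoRA/training hyperparameters are illustrative assumptions, not the script's exact values:

```python
# Sketch of PEFT (LoRA) fine-tuning on top of a frozen GPTQ model.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_path = "TheBloke/openchat-3.5-0106-GPTQ"  # or a local GPTQ folder
adapter_dir = "./peft-adapter"                  # where to save the adapter

tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

# Freeze the quantised weights and attach small trainable LoRA matrices.
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Load the CSV dataset; the "text" column name is an assumption.
data = load_dataset("csv", data_files="jsonsearch2.csv")["train"]
data = data.map(lambda r: tokenizer(r["text"], truncation=True, max_length=512))

trainer = Trainer(
    model=model,
    train_dataset=data,
    args=TrainingArguments(output_dir="./checkpoints", num_train_epochs=3,
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=4, fp16=True),
    # mlm=False makes the collator copy input_ids into labels for causal LM.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained(adapter_dir)  # writes only the small adapter weights
```

Only the LoRA weights are written out, which is why the resulting adapter is a few megabytes rather than a full model checkpoint.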

