There are 29 repositories under the finetuning-llms topic.
A friendly neighborhood repository with diverse experiments and adventures in the world of LLMs
Auto Data is a library designed for quick and effortless creation of datasets tailored for fine-tuning Large Language Models (LLMs).
Fine-tune Mistral 7B to generate fashion style suggestions
A Gradio web UI for Large Language Models. Supports LoRA/QLoRA fine-tuning, RAG (retrieval-augmented generation), and chat
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
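A minimal pure-Python sketch of the idea behind GaLore: the full gradient matrix is projected onto a low-rank subspace (here rank 1, found by power iteration) so optimizer state can be kept in the smaller space, then projected back for the weight update. All function names are illustrative, not the library's actual API.

```python
def matmul(A, B):
    """Plain list-of-lists matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(row) for row in zip(*A)]

def top_left_singular_vector(G, iters=50):
    """Approximate the leading left singular vector of G by power iteration on G G^T."""
    m = len(G)
    GGt = matmul(G, transpose(G))
    u = [1.0] * m
    for _ in range(iters):
        v = [sum(GGt[i][j] * u[j] for j in range(m)) for i in range(m)]
        norm = sum(x * x for x in v) ** 0.5
        u = [x / norm for x in v]
    return u

def project(G, u):
    """Compress the gradient to rank 1: the single row u^T G."""
    return [[sum(u[i] * G[i][j] for i in range(len(G))) for j in range(len(G[0]))]]

def project_back(g_low, u):
    """Expand the compressed gradient back to full shape: u (u^T G)."""
    return [[u[i] * g_low[0][j] for j in range(len(g_low[0]))] for i in range(len(u))]
```

For a gradient that is already (near) rank 1, projecting down and back recovers it, while the optimizer only ever stores the much smaller compressed row; GaLore itself uses a rank-r projection refreshed periodically from an SVD of the gradient.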
End-to-end generative AI industry projects on LLMs, with deployment
Finetuning Google's Gemma Model for Translating Natural Language into SQL
Qwen-1.5-1.8B sentiment analysis with prompt optimization and QLoRA fine-tuning
An open-source framework designed to adapt pre-trained large language models (LLMs), such as Llama, Mistral, and Mixtral, to a wide array of domains and languages.
Code Wizard is a coding companion and code-generation tool powered by CodeLLama-v2-34B that automatically generates and improves code based on best practices found in your GitHub repository.
Fine-tuning of language models and prompt engineering, using the problem setting of stock price prediction based on high-frequency OHLC stock price data for AAPL. Trains gpt-3.5-turbo on OHLC data to obtain raw-return and log-return predictions.
Code for fine-tuning the Llama 2 LLM with a custom text dataset to produce film-character-styled responses
Fine-tune Phi-2 for persona-grounded chat
An audio journaling app that provides AI analysis for your journal entries
This repository contains code for fine-tuning the Llama 3 8B model using Alpaca prompts to generate Java code. The code is based on a Google Colab notebook.
Experiments with the Meta-Llama-3-8B
Finetuning OpenCodeInterpreter-DS-6.7B for Text-to-SQL Code Generation on a Single A100 GPU
This repo shows how to fine-tune Google's new Gemma LLM on your own custom instruction dataset. The Gemma 2B Instruct model was fine-tuned on 20k Medium articles for 5 hours on a Kaggle P100 GPU
Fine-tuning the t5-base model for detoxifying texts.
(In-progress) Finetuning OpenAI's GPT-3.5-Turbo as a base model on open-source data about the Tampa Bay region to create a chatbot specializing in information on the area!
The repository contains the code used to create an instruct-style dataset of Telugu news articles.
This repo contains influential papers that apply fine-tuning techniques to LLMs for domain-specific tasks.
This repository showcases Python scripts demonstrating interactions with various models using the LangChain library. From fine-tuning to custom runnables, explore examples with Gemini, Hugging Face, and Mistral AI models.
Fine-tuned FLAN-T5 using full instruction fine-tuning, LoRA-based PEFT, and RLHF with PPO
Clone from the following URL, as the model file is too large to upload to GitHub.
Jupyter notebooks from the "Finetune LLMs" course at deeplearning.ai
Fine-Tuned Language Models Exploration using LoRA and Hugging Face's Transformers Library
The LARGE LANGUAGE MODEL FOR HYDROGEN STORAGE project uses advanced natural language processing to improve research efficiency. It offers concise summaries and answers questions about hydrogen storage research papers, helping users quickly understand key insights and latest advancements.