There are 30 repositories under the fine-tuning topic.
LlamaIndex is a data framework for your LLM applications
Unify Efficient Fine-Tuning of 100+ LLMs
Using low-rank adaptation (LoRA) to quickly fine-tune diffusion models.
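The low-rank idea behind several repos in this list (LoRA for diffusion models, PEFT, LLM-Adapters) can be sketched in a few lines of NumPy. This is an illustrative toy, not code from any listed repository; the dimensions and rank are made up.

```python
import numpy as np

# LoRA keeps the pretrained weight W frozen and learns a low-rank
# update delta_W = B @ A with rank r much smaller than the layer size.
d_out, d_in, r = 64, 64, 4
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in))       # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero-init

delta_W = B @ A                          # same shape as W, so W + delta_W drops in
assert delta_W.shape == W.shape

# Trainable parameter count drops from d_out*d_in to r*(d_out + d_in).
full, lora = d_out * d_in, r * (d_out + d_in)
print(full, lora)  # 4096 512
```

Zero-initializing `B` makes `delta_W` start at zero, so fine-tuning begins exactly at the pretrained weights.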
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/
Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT
🔥🔥High-Performance Face Recognition Library on PaddlePaddle & PyTorch🔥🔥
Build, customize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our Discord community: https://discord.gg/TgHXuSJEk6
🦖 Learn about LLMs, LLMOps, and vector DBs for free by designing, training, and deploying a real-time financial advisor LLM system ~ source code + video & reading materials
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
LLM fine-tuning with peft
A comprehensive guide to building RAG-based LLM applications for production.
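The retrieval half of a RAG pipeline like the one the guide above covers can be reduced to embedding documents and ranking them by similarity to a query. The sketch below uses made-up documents and a bag-of-words embedding as a stand-in for a real embedding model; it is not code from the guide.

```python
import numpy as np

# Toy corpus and vocabulary; a production system would use a trained
# embedding model and a vector database instead.
docs = ["llm fine tuning", "kubernetes training jobs", "diffusion image models"]
vocab = sorted({w for d in docs for w in d.split()})

def embed(text):
    # Bag-of-words vector, L2-normalized so dot product = cosine similarity.
    v = np.array([text.split().count(w) for w in vocab], dtype=float)
    n = np.linalg.norm(v)
    return v / n if n else v

doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query, k=1):
    # Rank all documents by cosine similarity and return the top k.
    scores = doc_vecs @ embed(query)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

print(retrieve("fine tuning an llm"))  # ['llm fine tuning']
```

The retrieved passages would then be stuffed into the LLM prompt as context, which is the "augmented generation" half of RAG.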
Distributed ML Training and Fine-Tuning on Kubernetes
A JAX research toolkit for building, editing, and visualizing neural networks.
WebUI for Fine-Tuning and Self-hosting of Open-Source Large Language Models for Coding
RAG (Retrieval Augmented Generation) Framework for building modular, open source applications for production by TrueFoundry
OneTrainer is a one-stop solution for all your Stable Diffusion training needs.
A repository that contains models, datasets, and fine-tuning techniques for DB-GPT, with the purpose of enhancing model performance in Text-to-SQL
Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo
Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
This repo contains a PyTorch implementation of a pretrained BERT model for multi-label text classification.
LibFewShot: A Comprehensive Library for Few-shot Learning. TPAMI 2023.
[MICCAI 2019] [MEDIA 2020] Models Genesis
The easiest-to-understand tutorial on using LoRA (Low-Rank Adaptation) within the diffusers framework, for AI generation researchers 🔥
Toolkit for fine-tuning, ablating, and unit-testing open-source LLMs.
DataDreamer: Prompt. Generate Synthetic Data. Train & Align Models. 🤖💤
Open source data anonymization and synthetic data orchestration for developers. Create high fidelity synthetic data and sync it across your environments.