There are 25 repositories under the llm-training topic.
SkyPilot: Run LLMs, AI, and Batch jobs on any cloud. Get maximum savings, highest GPU availability, and managed execution—all with a simple interface.
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/
Code examples and resources for DBRX, a large language model developed by Databricks
DLRover: An Automatic Distributed Deep Learning System
Nvidia GPU exporter for prometheus using nvidia-smi binary
LLM-PowerHouse: Unleash LLMs' potential through curated tutorials, best practices, and ready-to-use code for custom training and inferencing.
irresponsible innovation. Try now at https://chat.dev/
Repo for fine-tuning Causal LLMs
The official repo of Aquila2 series proposed by BAAI, including pretrained & chat large language models.
Open Source LLM toolkit to build trustworthy LLM applications. TigerArmor (AI safety), TigerRAG (embedding, RAG), TigerTune (fine-tuning)
LLM (Large Language Model) FineTuning
Tune LLMs in a few lines of code
Fine-tune LLMs on K8s using Runbooks
Sequence Parallel Attention for Long-Context LLM Training and Inference
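The core idea behind sequence-parallel attention is that query rows can be sharded across devices while each shard attends to the full (gathered) key/value sequence. The toy single-process sketch below illustrates this with NumPy; it is not the repository's implementation, and the shard count and tensor shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # standard scaled dot-product attention
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def sharded_attention(q, k, v, num_shards):
    # Sequence parallelism (toy version): split query rows into contiguous
    # shards ("devices"); each shard attends to the full K/V, as it would
    # after an all-gather, then outputs are concatenated back in order.
    shards = np.array_split(q, num_shards, axis=0)
    return np.concatenate([attention(s, k, v) for s in shards], axis=0)

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 4))
k = rng.normal(size=(8, 4))
v = rng.normal(size=(8, 4))

# sharded computation matches the unsharded reference
assert np.allclose(attention(q, k, v), sharded_attention(q, k, v, num_shards=4))
```

Real implementations overlap the K/V communication with computation; the equivalence above is what makes that decomposition valid.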
Collection of best practices, reference architectures, model training examples and utilities to train large models on AWS.
An LLM cookbook for building your own from scratch, all the way from gathering data to training a model
Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA
Repository for organizing datasets and papers used in Open LLM.
A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ
Auto Data is a library designed for quick and effortless creation of datasets tailored for fine-tuning Large Language Models (LLMs).
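Fine-tuning datasets like those above are commonly exchanged as JSONL, one instruction/input/output record per line. The snippet below sketches that layout in plain Python; the records and helper function are illustrative assumptions, not Auto Data's actual API.

```python
import json

# Hypothetical instruction-tuning records; the field names
# (instruction/input/output) follow a common fine-tuning convention.
records = [
    {"instruction": "Summarize the text.",
     "input": "LLMs are large neural networks trained on text.",
     "output": "LLMs are big text-trained neural nets."},
    {"instruction": "Translate to French.",
     "input": "Hello",
     "output": "Bonjour"},
]

def to_jsonl(records):
    # one JSON object per line, round-trippable with json.loads
    return "\n".join(json.dumps(r, ensure_ascii=False) for r in records)

jsonl = to_jsonl(records)
parsed = [json.loads(line) for line in jsonl.splitlines()]
assert parsed == records
```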
npm like package ecosystem for Prompts 🤖
A Python package for automatically training and comparing language models.
Sotopia-π: Interactive Learning of Socially Intelligent Language Agents (ACL 2024)
CompanionLLM - A framework to finetune LLMs to be your own sentient conversational companion
This repository contains resources for accessing the official benchmarks, code, and checkpoints of the paper "Breaking Language Barriers in Multilingual Mathematical Reasoning: Insights and Observations".