Repositories under the finetuning topic:
Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default & custom datasets for applications such as summarization and Q&A, and a number of inference solutions such as HF TGI and vLLM for local or cloud deployment. Includes demo apps showcasing Meta Llama3 for WhatsApp & Messenger.
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/
A PyTorch Library for Meta-learning Research
Build, customize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our discord community: https://discord.gg/TgHXuSJEk6
Interact with your SQL database: natural language to SQL using LLMs
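A natural-language-to-SQL pipeline typically injects the database schema and the user's question into a prompt for the LLM. The sketch below shows only the prompt-construction step; the function name, template wording, and schema are illustrative assumptions, not the API of any repository listed here.

```python
# Hypothetical sketch of prompt construction for natural-language-to-SQL.
# The template and helper name are made up for illustration.

def build_text2sql_prompt(schema: str, question: str) -> str:
    """Combine a database schema and a user question into an LLM prompt."""
    return (
        "Given the following SQL schema:\n"
        f"{schema}\n\n"
        "Write a single SQL query that answers this question:\n"
        f"{question}\nSQL:"
    )

schema = "CREATE TABLE users (id INT, name TEXT, signup_date DATE);"
question = "How many users signed up in 2023?"
prompt = build_text2sql_prompt(schema, question)
```

The resulting string would then be sent to whatever chat or completion endpoint the project uses, and the model's reply executed (carefully, e.g. read-only) against the database.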
Curated tutorials and resources for Large Language Models, Text2SQL, Text2DSL, Text2API, Text2Vis and more.
Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.
Finetuning large language models for GDScript generation.
High-quality, stable OpenAI API access for enterprises and developers. An OpenAI API proxy supporting ChatGPT API calls and the OpenAI API, including gpt-4 and gpt-3.5. No OpenAI key, OpenAI account, or USD bank card required; just call it directly. Stable and easy to use! 智增增
[IJCAI 2023 survey track]A curated list of resources for chemical pre-trained models
Guide: Finetune GPT2-XL (1.5 billion parameters) and GPT-NEO (2.7B) on a single GPU with Hugging Face Transformers using DeepSpeed
End-to-End recipes for pre-training and fine-tuning BERT using Azure Machine Learning Service
Provides best practices for LMOps, as well as elegant and convenient access to the features of the Qianfan MaaS Platform.
🔥 Korean GPT-2 (KoGPT2) fine-tuning, trained on Korean lyrics data 🔥
Tune an LLM in a few lines of code
Llama 2 finetuning with DeepSpeed and LoRA
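LoRA, the technique this repo and several others above rely on, freezes the pretrained weight matrix W and learns only a low-rank update B·A scaled by alpha/r. The NumPy sketch below illustrates the idea on a toy layer; the shapes and variable names are assumptions for illustration, not the `peft` library's API.

```python
# Minimal sketch of the LoRA idea: keep W frozen, train only the
# low-rank factors A and B. Toy dimensions chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 16, 2, 4

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized

def lora_forward(x):
    """y = x W^T + (alpha/r) * x A^T B^T  -- only A and B receive gradients."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.standard_normal((3, d_in))
# With B zero-initialized, the adapted layer starts out identical to the
# frozen layer, so fine-tuning begins from the pretrained behavior.
y = lora_forward(x)
```

Because only A and B (r·(d_in + d_out) parameters) are trained instead of the full d_out·d_in matrix, LoRA fits large models on modest GPUs, which is why it pairs naturally with DeepSpeed's memory optimizations.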
ChatGLM2-6B finetuning and Alpaca finetuning
Fine-tune Facebook's DETR (DEtection TRansformer) on Colaboratory.
a friendly neighborhood repository with diverse experiments and adventures in the world of LLMs
Praetor is a lightweight finetuning data and prompt management tool
Finetune mistral-7b-instruct for sentence embeddings
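To get sentence embeddings out of a decoder-style LLM like the one above, per-token hidden states are commonly collapsed into one vector with masked mean pooling. The sketch below shows that pooling step on made-up hidden states; in a real setup they would come from the model's last layer, and the repo may use a different pooling strategy.

```python
# Sketch of masked mean pooling over token hidden states.
# The toy hidden states and attention mask are fabricated for illustration.
import numpy as np

def mean_pool(hidden_states, attention_mask):
    """Average token vectors, ignoring padding positions (mask == 0)."""
    mask = attention_mask[..., None].astype(hidden_states.dtype)  # (batch, seq, 1)
    summed = (hidden_states * mask).sum(axis=1)
    counts = mask.sum(axis=1)
    return summed / counts

hidden = np.array([[[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]])  # (batch=1, seq=3, dim=2)
mask = np.array([[1, 1, 0]])                               # last token is padding
embedding = mean_pool(hidden, mask)                        # average of first two tokens
```

The padded token is excluded from the average, so sentences of different lengths in one batch yield comparable embeddings.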
A PyTorch Lightning extension that accelerates and enhances foundation model experimentation with flexible fine-tuning schedules.
Finetune any model on HF in less than 30 seconds
Official implementation of DPFM @ ICLR 2024 paper "AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts" (Huggingface Daily Papers: https://huggingface.co/papers/2402.07625)