There are 12 repositories under the parameter-efficient-fine-tuning topic.
Collection of awesome parameter-efficient fine-tuning resources.
A paper list covering large multi-modality models, parameter-efficient fine-tuning, vision-language pretraining, and conventional image-text matching, intended as a preliminary insight into the field.
[SIGIR'24] The official implementation of MOELoRA.
[ICLR 2024] The repository for the paper "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning".
This is the official repository of the papers "Parameter-Efficient Transfer Learning of Audio Spectrogram Transformers" and "Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters".
Exploring the potential of fine-tuning Large Language Models (LLMs) such as Llama2 and StableLM for medical entity extraction. This project adapts these models with PEFT, Adapter V2, and LoRA techniques to efficiently and accurately extract drug names and adverse side effects from pharmaceutical texts.
[ICRA 2024] Official Implementation of the Paper "Parameter-efficient Prompt Learning for 3D Point Cloud Understanding"
[WACV 2024] MACP: Efficient Model Adaptation for Cooperative Perception.
Fine-tuning Mistral-7b with PEFT (Parameter-Efficient Fine-Tuning) and LoRA (Low-Rank Adaptation) on the Puffin dataset (multi-turn conversations between GPT-4 and real humans).
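LoRA, used by several of the repositories above, freezes the pretrained weight matrix W and learns only a low-rank update ΔW = BA, scaled by alpha/r. A minimal NumPy sketch of the idea (the tiny dimensions, rank, and alpha are illustrative, not values from any of these repos):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 8, 8, 2   # weight shape (d x k); LoRA rank r << min(d, k)
alpha = 4           # LoRA scaling hyperparameter

W = rng.normal(size=(d, k))          # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01   # trainable, small random init
B = np.zeros((d, r))                 # trainable, zero init -> delta W starts at 0

def lora_forward(x):
    # y = W x + (alpha / r) * B A x ; only A and B would receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(k,))
# Because B starts at zero, the adapted model initially matches the frozen one.
print(np.allclose(lora_forward(x), W @ x))

# Parameter savings: r*(d+k) trainable values instead of d*k.
print(r * (d + k), "trainable vs", d * k, "frozen")
```

The zero initialization of B is the standard LoRA trick: training starts exactly at the pretrained model, and only the r*(d+k) adapter parameters move.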
Code for the EACL 2024 paper: "Small Language Models Improve Giants by Rewriting Their Outputs"
This repository contains the lab work for the Coursera course "Generative AI with Large Language Models".
A Production-Ready, Scalable RAG-powered LLM-based Context-Aware QA App
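A RAG pipeline like the one in the last entry retrieves the documents most relevant to a query and passes them to the LLM as context. A minimal sketch of the retrieval step using bag-of-words cosine similarity (the corpus, query, and `retrieve` helper are illustrative assumptions; a production app would use dense embeddings and a vector store):

```python
from collections import Counter
import math

# Toy corpus standing in for an indexed document store.
corpus = [
    "LoRA adds low-rank adapters to frozen weights",
    "Prompt tuning learns soft prompt embeddings",
    "RAG retrieves documents to ground LLM answers",
]

def vectorize(text):
    # Bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, top_k=1):
    # Rank documents by similarity to the query, keep the best top_k.
    q = vectorize(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:top_k]

context = retrieve("how does RAG ground LLM answers")[0]
# The retrieved text is then prepended to the LLM prompt as context.
prompt = f"Answer using this context:\n{context}\n\nQuestion: how does RAG work?"
print(context)
```

The retrieval step is what makes the app "context-aware": the generator only sees the handful of documents scored most similar to the user's question.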