There are 29 repositories under the parameter-efficient-tuning topic.
A Unified Library for Parameter-Efficient and Modular Transfer Learning
A curated list of prompt-based papers in computer vision and vision-language learning.
The Paper List of Cross-Modal Matching / Pretraining / Transferring for Preliminary Insight.
A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains.
Research Trends in LLM-guided Multimodal Learning.
Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning
Official implementation for CVPR'23 paper "BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning"
Official implementation of AAAI 2023 paper "Parameter-efficient Model Adaptation for Vision Transformers"
On Transferability of Prompt Tuning for Natural Language Processing
[ICLR 2024] This is the repository for the paper titled "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning"
[NeurIPS2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model
Code for paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning", ACL 2022
[arXiv] Cross-Modal Adapter for Text-Video Retrieval
ZhiJian: A Unifying and Rapidly Deployable Toolbox for Pre-trained Model Reuse
[CVPR 2023] VoP: Text-Video Co-operative Prompt Tuning for Cross-Modal Retrieval
Multi-domain Recommendation with Adapter Tuning
Code for the Findings of NAACL 2022(Long Paper): AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks
AlpaGasus2-QLoRA: LLaMA2 fine-tuned with the AlpaGasus mechanism using QLoRA
[NeurIPS-2022] Annual Conference on Neural Information Processing Systems
Evaluate robustness of adaptation methods on large vision-language models
[ICRA 2024] Official Implementation of the Paper "Parameter-efficient Prompt Learning for 3D Point Cloud Understanding"
Code for EACL'23 paper "Udapter: Efficient Domain Adaptation Using Adapters"
This repository contains the source code for the paper "Grouped Pointwise Convolutions Reduce Parameters in Convolutional Neural Networks".
Applied Deep Learning 深度學習之應用 by Vivian Chen 陳縕儂 at NTU CSIE
KR3: Korean Restaurant Review with Ratings / Experiments on Parameter-efficient Tuning and Task-adaptive Pre-training
The code for the paper "Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models" (ICCV'23).
Code for fine-tuning Llama2 LLM with custom text dataset to produce film character styled responses
Code for SAFT: Self-Attention Factor-Tuning, a 16x more efficient solution for fine-tuning neural networks
The code for generating natural distribution shifts on image and text datasets.
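The common idea behind many of the adapter-style repositories above is to freeze the pretrained weights and train only a small residual bottleneck inserted into each layer. A minimal numpy sketch of that structure (dimensions and zero-initialization are illustrative assumptions, not taken from any specific repository listed here):

```python
import numpy as np

# Minimal bottleneck-adapter sketch: a frozen pretrained linear layer plus a
# small trainable down-project / up-project residual path. Hidden size and
# bottleneck size below are assumed for illustration only.
d_model, d_bottleneck = 768, 16

rng = np.random.default_rng(0)
W_frozen = rng.standard_normal((d_model, d_model))            # pretrained, kept frozen
W_down = rng.standard_normal((d_bottleneck, d_model)) * 0.01  # trainable down-projection
W_up = np.zeros((d_model, d_bottleneck))                      # trainable up-projection,
                                                              # zero-init => identity at start

def layer_with_adapter(h):
    """Frozen linear layer followed by a residual bottleneck adapter."""
    h = W_frozen @ h
    return h + W_up @ np.maximum(W_down @ h, 0.0)  # ReLU bottleneck on a residual path

trainable = W_down.size + W_up.size
total = W_frozen.size + trainable
print(f"trainable fraction: {trainable / total:.1%}")  # prints "trainable fraction: 4.0%"
```

With these sizes only the two bottleneck matrices (2 × 16 × 768 = 24,576 parameters) are updated, a few percent of the layer's total, which is the parameter-efficiency the topic name refers to; the zero-initialized up-projection makes the adapted layer exactly match the frozen layer before any training.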