GGG-c's starred repositories
peft-llm-code
Replication package of the paper "Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models".
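The common thread in these PEFT techniques is freezing the pretrained weights and training only small adapter matrices, LoRA being the canonical example. A minimal sketch using Hugging Face's `peft` library; the CodeGen checkpoint and hyperparameters are illustrative, not necessarily the paper's configuration:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")

config = LoraConfig(
    r=8,                          # rank of the low-rank update matrices
    lora_alpha=16,                # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["qkv_proj"],  # which linear layers get adapters (model-specific)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```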
pipeline_peft_for_llms
This repository contains the code for our work, which appeared in the EMNLP 2023 main conference.
Stitched_LLaMA
[CVPR 2024] A framework to fine-tune LLaMAs on instruction-following tasks and obtain many Stitched LLaMAs with customized parameter counts, e.g., Stitched LLaMA 8B, 9B, and 10B...
ResidualPrompts
Residual Prompt Tuning: a method for faster and better prompt tuning.
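The idea, roughly: instead of tuning the soft prompt embeddings directly, reparameterize them through a shallow MLP with a skip connection, which stabilizes and speeds up prompt tuning. A minimal PyTorch sketch with illustrative sizes:

```python
import torch
import torch.nn as nn

class ResidualPrompt(nn.Module):
    """Sketch of residual reparameterization of a soft prompt: the
    prompt embeddings pass through a shallow MLP and are added back
    via a skip connection. All sizes here are illustrative."""
    def __init__(self, num_tokens: int = 10, dim: int = 768, bottleneck: int = 128):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(num_tokens, dim) * 0.02)
        self.mlp = nn.Sequential(
            nn.Linear(dim, bottleneck),
            nn.ReLU(),
            nn.Linear(bottleneck, dim),
        )
        self.norm = nn.LayerNorm(dim)

    def forward(self) -> torch.Tensor:
        # Residual connection: reparameterized prompt = prompt + MLP(prompt).
        return self.norm(self.prompt + self.mlp(self.prompt))

# The returned (num_tokens, dim) tensor would be prepended to the
# input embeddings of a frozen language model.
prompt_embeddings = ResidualPrompt()()
```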
PEM_composition
[NeurIPS 2023] Github repository for "Composing Parameter-Efficient Modules with Arithmetic Operations"
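Such composition reduces to parameter-wise arithmetic on the adapter deltas: a weighted sum merges skills, while a negative weight negates one. A small sketch under that assumption; the helper name and usage are illustrative:

```python
def compose_modules(deltas, weights):
    """Sketch of arithmetic composition of parameter-efficient modules:
    combine several adapter deltas (state dicts with matching shapes)
    by a weighted sum. Positive weights add capabilities; a negative
    weight subtracts one, as in task negation."""
    keys = deltas[0].keys()
    return {k: sum(w * d[k] for w, d in zip(weights, deltas)) for k in keys}

# Illustrative usage: merge two adapter deltas, partially negate a third,
# then apply the result on top of the frozen base weights.
# merged = compose_modules([delta_a, delta_b, delta_toxic], [0.5, 0.5, -0.3])
# model.load_state_dict({k: base[k] + merged[k] for k in merged}, strict=False)
```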
PyContinual
An easy and extendible framework for continual learning.
PET_Scaling
Exploring the Impact of Model Scaling on Parameter-efficient Tuning Methods
Black-Box-Tuning
[ICML 2022] Black-Box Tuning for Language-Model-as-a-Service & [EMNLP 2022] BBTv2: Towards a Gradient-Free Future with Large Language Models
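The black-box setting assumes the served model exposes only loss values, no gradients: BBT searches a soft prompt in a low-dimensional subspace through a fixed random projection, using a derivative-free optimizer. A toy sketch of that idea, with a simple (1+1)-ES standing in for the paper's CMA-ES and a dummy `query_loss` in place of the model API (both stand-ins are assumptions, not the repo's code):

```python
import numpy as np

D_PROMPT = 5 * 768  # full prompt dimensionality (prompt tokens x embed dim)
D_LOW = 100         # low-dimensional subspace actually searched

rng = np.random.default_rng(0)
A = rng.normal(0, 1 / np.sqrt(D_LOW), size=(D_PROMPT, D_LOW))  # fixed projection

def query_loss(prompt: np.ndarray) -> float:
    """Placeholder for a forward pass through the served LM API."""
    return float(np.mean((prompt - 0.1) ** 2))  # dummy objective

z = np.zeros(D_LOW)
best = query_loss(A @ z)
sigma = 0.1
for _ in range(200):
    cand = z + sigma * rng.normal(size=D_LOW)  # mutate in the low-dim space
    loss = query_loss(A @ cand)                # black-box evaluation only
    if loss < best:                            # keep improvements
        z, best = cand, loss
```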
DiffPruning
Parameter Efficient Transfer Learning with Diff Pruning
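Diff pruning learns a sparse, task-specific difference vector on top of frozen pretrained weights. A rough PyTorch sketch; the paper sparsifies the diff with a relaxed L0 (hard-concrete) penalty, for which plain L1 is substituted here for brevity:

```python
import torch
import torch.nn as nn

class DiffPrunedLinear(nn.Module):
    """Sketch of the diff-pruning parameterization: the pretrained
    weight is frozen and a task-specific diff is learned on top."""
    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        self.weight = pretrained.weight.detach()  # frozen base weights
        self.bias = pretrained.bias.detach()
        self.diff = nn.Parameter(torch.zeros_like(self.weight))

    def forward(self, x):
        return x @ (self.weight + self.diff).T + self.bias

    def sparsity_penalty(self):
        # L1 stand-in for the paper's relaxed-L0 regularizer;
        # add to the task loss, scaled by a coefficient.
        return self.diff.abs().sum()

layer = DiffPrunedLinear(nn.Linear(768, 768))
loss = layer(torch.randn(4, 768)).pow(2).mean() + 1e-4 * layer.sparsity_penalty()
loss.backward()  # gradients flow only into the diff parameters
```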