
Awesome-LoRA

Awesome-LoRA is a collection of state-of-the-art (SOTA) and novel low-rank adaptation methods (papers, code, and datasets). Contributions of other interesting papers and code are welcome. For any problems, please contact jiyuheng2023@ia.ac.cn. If you find this repository useful for your research or work, please consider starring it. ✨



What's LoRA (Low-Rank Adaptation)?

LoRA is a parameter-efficient fine-tuning technique proposed by Microsoft researchers for adapting large pretrained models to specific tasks and datasets. Instead of updating the full weight matrices, LoRA freezes the pretrained weights and trains small low-rank matrices that are added to selected layers, so only a tiny fraction of the parameters needs to be updated and stored.
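Concretely, LoRA keeps the pretrained weight W frozen and learns an additive low-rank update ΔW = BA with rank r much smaller than the layer dimensions. The sketch below (PyTorch; the `LoRALinear` wrapper and its default hyperparameters are illustrative, not the official implementation) shows the core idea:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper: freeze a pretrained linear layer and
    train only the low-rank factors A and B, i.e. W_eff = W + (alpha / r) * B @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay frozen
        # A starts with small random values, B with zeros, so the adapter
        # is a no-op at initialization (as in the original paper).
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the trainable low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```

Because only A and B are trained, the trainable parameter count per layer drops from d_out × d_in to r × (d_in + d_out), and the learned update can be merged back into W after training, so inference incurs no extra latency.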

The pioneering paper

| Year | Title | Venue | Paper | Code |
| ---- | ----- | ----- | ----- | ---- |
| 2022 | LoRA: Low-Rank Adaptation of Large Language Models | ICLR | Link | Link |

Important Survey Papers

| Year | Title | Venue | Paper | Code |
| ---- | ----- | ----- | ----- | ---- |
| -    | -     | -     | -     | -    |

Papers

| Year | Title | Venue | Paper | Code |
| ---- | ----- | ----- | ----- | ---- |
| 2024 | AdvLoRA: Adversarial Low-Rank Adaptation of Vision-Language Models | arXiv | Link | - |
| 2024 | Parameter-Efficient Fine-Tuning with Discrete Fourier Transform | ICML | Link | Link |
| 2024 | LoNAS: Elastic Low-Rank Adapters for Efficient Large Language Models | COLING | Link | Link |
| 2024 | LoRA Learns Less and Forgets Less | arXiv | Link | - |
| 2024 | MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning | arXiv | Link | Link |
| 2024 | LoRA+: Efficient Low Rank Adaptation of Large Models | arXiv | Link | Link |
| 2024 | PeriodicLoRA: Breaking the Low-Rank Bottleneck in LoRA Optimization | arXiv | Link | - |
| 2024 | Derivative-Free Optimization for Low-Rank Adaptation in Large Language Models | arXiv | Link | Link |
| 2024 | Multi-LoRA Composition for Image Generation | arXiv | Link | Link |
| 2024 | BiLoRA: A Bi-level Optimization Framework for Overfitting-Resilient Low-Rank Adaptation of Large Pre-trained Models | arXiv | Link | - |
| 2024 | AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models | arXiv | Link | - |
| 2024 | LoRA Meets Dropout under a Unified Framework | arXiv | Link | - |
| 2024 | MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning | arXiv | Link | Link |
| 2024 | GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection | ICML | Link | Link |
| 2024 | Let's Focus on Neuron: Neuron-Level Supervised Fine-tuning for Large Language Model | arXiv | Link | - |
| 2024 | LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning | arXiv | Link | - |
| 2023 | DyLoRA: Parameter-Efficient Tuning of Pre-trained Models using Dynamic Search-Free Low-Rank Adaptation | EACL | Link | Link |
| 2023 | The Expressive Power of Low-Rank Adaptation | ICLR | Link | Link |
| 2023 | Exploring the Impact of Low-Rank Adaptation on the Performance, Efficiency, and Regularization of RLHF | arXiv | Link | Link |
| 2023 | Deep Learning Model Compression With Rank Reduction in Tensor Decomposition | TNNLS | Link | - |
| 2023 | LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment | arXiv | Link | - |
| 2023 | Bayesian Low-Rank Adaptation for Large Language Models | ICLR | Link | Link |
| 2023 | LoRA-FA: Memory-Efficient Low-Rank Adaptation for Large Language Models Fine-Tuning | arXiv | Link | - |
| 2023 | Motion Style Transfer: Modular Low-Rank Adaptation for Deep Motion Forecasting | PMLR | Link | Link |
| 2023 | Sparse Low-Rank Adaptation of Pre-trained Language Models | EMNLP | Link | Link |
| 2023 | Low-Rank Adaptation of Large Language Model Rescoring for Parameter-Efficient Speech Recognition | ASRU | Link | - |
| 2023 | SiRA: Sparse Mixture of Low Rank Adaptation | arXiv | Link | - |
| 2022 | LoRA: Low-Rank Adaptation of Large Language Models | ICLR | Link | Link |

Others

| Year | Title | Venue | Paper | Code |
| ---- | ----- | ----- | ----- | ---- |
| 2021 | Compacter: Efficient Low-Rank Hypercomplex Adapter Layers | NeurIPS | Link | Link |

Packages

| Package | Link |
| ------- | ---- |
| Hugging Face PEFT | Link |
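A minimal usage sketch for Hugging Face PEFT is given below; the checkpoint name and `target_modules` are example choices, not requirements, and depend on the model architecture you adapt.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load any causal LM; "facebook/opt-350m" is just an example checkpoint.
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# Attach LoRA adapters to the attention projection layers.
config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # module names vary by architecture
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the LoRA parameters are trainable
```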
