There are 11 repositories under the parameter-efficient topic.
We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-tuning) for easy use. We welcome open-source enthusiasts to open any meaningful PR on this repo and to integrate as many LLM-related technologies as possible. We built this fine-tuning platform to make it easy for researchers to get started with and use large models; any meaningful PR is welcome!
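The repository's own wrapper API is not reproduced here; as a generic illustration of what a unified interface across parameter-efficient methods looks like, the Hugging Face `peft` library exposes LoRA and P-tuning behind the same `get_peft_model()` call. The model name and hyperparameters below are placeholders.

```python
# Generic illustration (not this repo's API): LoRA and P-tuning share
# one wrapping interface in Hugging Face `peft`.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PromptEncoderConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # any causal LM

# LoRA: low-rank updates injected into the attention projections.
lora = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16, lora_dropout=0.05)

# P-tuning: trainable virtual prompt tokens produced by a small encoder.
ptuning = PromptEncoderConfig(task_type="CAUSAL_LM", num_virtual_tokens=20)

model = get_peft_model(base, lora)  # swap in `ptuning` here, unchanged
model.print_trainable_parameters()  # only a small fraction of weights train
```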
Code for our EMNLP 2023 paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
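As a minimal sketch of the classic bottleneck adapter that families like this build on (module names and sizes here are illustrative, not the paper's code):

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project, nonlinearity, up-project, residual add.
    Only these small matrices are trained; the host layer stays frozen."""
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))
```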
This is the implementation of the paper "AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning" (https://arxiv.org/abs/2205.12410).
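A rough sketch of the mixture-of-adaptations idea, assuming simple bottleneck adapters: one adapter is picked at random per training forward pass. Note that the paper merges adapter weights at inference, whereas this sketch averages outputs for brevity.

```python
import random
import torch
import torch.nn as nn

def bottleneck(hidden: int = 768, dim: int = 64) -> nn.Module:
    return nn.Sequential(nn.Linear(hidden, dim), nn.GELU(), nn.Linear(dim, hidden))

class AdaMixSketch(nn.Module):
    """Several parallel adapters with stochastic routing during training."""
    def __init__(self, hidden: int = 768, dim: int = 64, n: int = 4):
        super().__init__()
        self.adapters = nn.ModuleList(bottleneck(hidden, dim) for _ in range(n))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            return x + random.choice(self.adapters)(x)  # random route per step
        # Inference: combine the adapters (output average; the paper
        # averages the adapter *weights* instead).
        return x + torch.stack([a(x) for a in self.adapters]).mean(0)
```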
K-CAI NEURAL API - a Keras-based neural network API that lets you create parameter-efficient, memory-efficient, FLOPs-efficient multipath models with new layer types. Plenty of examples and documentation are included.
Frame Flexible Network (CVPR 2023)
Official source code for the paper "Tailored Design of Audio-Visual Speech Recognition Models using Branchformers"
This repository contains the source code for the paper "Grouped Pointwise Convolutions Reduce Parameters in Convolutional Neural Networks".
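The parameter saving from grouping is easy to verify in PyTorch; the channel counts below (256 channels, 16 groups) are illustrative, not taken from the paper.

```python
import torch.nn as nn

def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

# A standard 1x1 (pointwise) convolution mixes all 256 input channels
# into each of the 256 output channels.
dense = nn.Conv2d(256, 256, kernel_size=1)               # 256*256 + 256 = 65,792

# Splitting the channels into 16 groups makes each filter see only
# 16 input channels, cutting the weight count accordingly.
grouped = nn.Conv2d(256, 256, kernel_size=1, groups=16)  # 256*16 + 256 = 4,352

print(n_params(dense), n_params(grouped))  # roughly 15x fewer parameters
```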
Code for "AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP tasks"
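AdapterBias's shift can be sketched as a single shared vector scaled per token by a scalar from a tiny linear layer. This is a simplified reading of the paper, with illustrative sizes; see the repo for the exact formulation.

```python
import torch
import torch.nn as nn

class AdapterBiasSketch(nn.Module):
    """Token-dependent representation shift: one shared direction `v`,
    with a per-token magnitude predicted by a tiny linear layer."""
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.v = nn.Parameter(torch.zeros(hidden_size))  # shared shift direction
        self.alpha = nn.Linear(hidden_size, 1)           # per-token scalar

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, tokens, hidden)
        return x + self.alpha(x) * self.v                 # broadcasts to (B, T, H)
```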
How many parameters are needed to reach 99% accuracy on MNIST? A personal record of 697 parameters.
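Verifying such a count is a one-liner in PyTorch; the tiny model below is a made-up placeholder, not the record-holding 697-parameter network.

```python
import torch.nn as nn

def count_params(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Placeholder model; the actual 697-parameter MNIST network is in the repo.
tiny = nn.Sequential(nn.Conv2d(1, 4, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                     nn.Flatten(), nn.Linear(4, 10))
print(count_params(tiny))  # 90 for this toy stand-in
```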
A modular and extensible LoRA fine-tuning framework for question-answering tasks with PEFT integration
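The framework's own wrapper API is not shown here; a plain Hugging Face `peft` setup for LoRA on an extractive-QA model looks like the sketch below, with an illustrative model name and hyperparameters.

```python
# Generic LoRA-for-QA setup with Hugging Face `peft`; model and
# hyperparameters are placeholders, not this repo's defaults.
from transformers import AutoModelForQuestionAnswering
from peft import LoraConfig, get_peft_model, TaskType

base = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")
config = LoraConfig(
    task_type=TaskType.QUESTION_ANS,    # extractive QA head
    r=8, lora_alpha=16, lora_dropout=0.1,
    target_modules=["query", "value"],  # BERT attention projections
)
model = get_peft_model(base, config)
model.print_trainable_parameters()      # only the low-rank matrices train
```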