Han Zhou's repositories
Alpaca-CoT
We unified the interfaces of instruction-tuning data (e.g., CoT data, continually expanded), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) at three levels, building an easy-to-use LLM instruction-fine-tuning research platform. Meanwhile, the tabular_llm branch builds an LLM for tabular intelligence tasks.
alpaca-lora
Instruct-tune LLaMA on consumer hardware
awesome-adapter-resources
A collection of tools and papers related to adapters (a.k.a. parameter-efficient transfer learning/fine-tuning)
Awesome-LLM-Prompt-Optimization
A curated list of advanced prompt optimization and tuning methods for Large Language Models
Awesome-Parameter-Efficient-Transfer-Learning
A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains.
Black-Box-Tuning
ICML 2022: Black-Box Tuning for Language-Model-as-a-Service & EMNLP 2022: BBTv2: Towards a Gradient-Free Future with Large Language Models
multi3woz_ltl
The official repository for Multi3WOZ: A Multilingual, Multi-Domain, Multi-Parallel Dataset for Training and Evaluating Culturally Adapted Task-Oriented Dialog Systems (Hu et al., to appear in TACL)
Awesome-LLM-Uncertainty-Reliability-Robustness
A curated list of work on uncertainty, reliability, and robustness in Large Language Models
Channel-LM-Prompting
An original implementation of "Noisy Channel Language Model Prompting for Few-Shot Text Classification"
DoLa
Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models"
function_vectors
Function Vectors in Large Language Models [ICLR 2024]
GrIPS
Code for our paper: "GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models"
ICV
Code for the paper "In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering"
LLM-Safeguard
Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models"
peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
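Since several repositories above (peft, ProPETL, S3Delta, pyreft) revolve around parameter-efficient fine-tuning, here is a minimal sketch of what LoRA-style tuning looks like with the 🤗 PEFT library; the base model name is illustrative and the hyperparameters are arbitrary example values, not settings taken from any of these projects.

```python
# Minimal LoRA sketch with 🤗 PEFT (model name and hyperparameters are illustrative).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # any causal LM works
lora = LoraConfig(
    task_type=TaskType.CAUSAL_LM,  # tells PEFT how to wrap the model
    r=8,                           # rank of the low-rank update matrices
    lora_alpha=16,                 # scaling factor applied to the update
    lora_dropout=0.05,
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of weights are trainable
```

Only the small injected low-rank matrices are trained while the base weights stay frozen, which is what makes the approach parameter-efficient.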
ProPETL
One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning
pyreft
ReFT: Representation Finetuning for Language Models
rl-prompt
Accompanying repo for the RLPrompt paper
S3Delta
Code for the paper "Sparse Structure Search for Delta Tuning"
transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
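For context, a minimal sketch of the 🤗 Transformers pipeline API that many of the repositories above build on; the model name and prompt are illustrative.

```python
# Minimal text-generation sketch with 🤗 Transformers (model and prompt are illustrative).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("Parameter-efficient fine-tuning is", max_new_tokens=20)
print(out[0]["generated_text"])
```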