Tingchen Fu's repositories
LlamaFactory
Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM)
ICLR24-TransContamination
Code for the paper "The Reasonableness Behind Unreasonable Translation Capability of Large Language Model"
abel
State-of-the-art open-source math LLM
Awesome-LLM
Awesome-LLM: a curated list of Large Language Model resources
BigcodeEvaluation
A framework for the evaluation of autoregressive code generation language models.
Conference-Acceptance-Rate
Acceptance rates for the major AI conferences
CipherChat
A framework to evaluate the generalization capability of safety alignment for LLMs
GLM-130B
GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
ICD
Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations"
InstructScore
The first explanation-based metric (diagnostic report) for text generation evaluation
LMEvaluationHarness
A framework for few-shot evaluation of autoregressive language models.
Medusa
Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads
mimir
Python package for measuring memorization in LLMs.
OpenCompass
OpenCompass is an LLM evaluation platform supporting a wide range of models (LLaMA, LLaMA-2, ChatGLM2, ChatGPT, Claude, etc.) across 50+ datasets.
ParroT
The ParroT framework enhances and regulates the translation abilities of open-source LLMs (e.g., LLaMA-7b, BLOOMZ-7b1-mt) during chat, using human-written translation and evaluation data.
PEMArithmetic
GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations"
RegMean
Code release for Dataless Knowledge Fusion by Merging Weights of Language Models (https://openreview.net/forum?id=FCnohuR6AnM)
RepE
Representation Engineering: A Top-Down Approach to AI Transparency
RiC
Code for the paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment"
SIGIR21-ConvDR
Code repo for the SIGIR 2021 paper "Few-Shot Conversational Dense Retrieval"
TangentArithmetic
Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models".
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs