Xiao Wang's starred repositories
gpt-researcher
GPT-based autonomous agent that performs comprehensive online research on any given topic
Awesome-Multimodal-Large-Language-Models
✨✨Latest Advances on Multimodal Large Language Models
lm-evaluation-harness
A framework for few-shot evaluation of language models.
opencompass
OpenCompass is an LLM evaluation platform supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, Llama2, Qwen, GLM, Claude, etc.) over 100+ datasets.
llm-attacks
Universal and Transferable Attacks on Aligned Language Models
awesome-RLHF
A curated list of reinforcement learning with human feedback resources (continually updated)
Torch-Pruning
[CVPR 2023] Towards Any Structural Pruning; LLMs / SAM / Diffusion / Transformers / YOLOv8 / CNNs
iFakeLocation
Simulate locations on iOS devices on Windows, Mac and Ubuntu.
Safety-Prompts
Chinese safety prompts for evaluating and improving the safety of LLMs.
LLM-Pruner
[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support LLaMA, Llama-2, BLOOM, Vicuna, Baichuan, etc.
AlignLLMHumanSurvey
Aligning Large Language Models with Human: A Survey
Woodpecker
✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models. The first work to correct hallucinations in MLLMs.
LLaVA-RLHF
Aligning LMMs with Factually Augmented RLHF
NLP4SocialGood_Papers
A reading list of up-to-date papers on NLP for Social Good.
ContinualLM
An Extensible Continual Learning Framework Focused on Language Models (LMs)
transpeeder
Train LLaMA on a single A100 80GB node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism
llama-pipeline-parallel
A prototype repo for hybrid training combining pipeline parallelism and distributed data parallelism, with comments on core code snippets. Feel free to copy the code and open discussions about any problems you encounter.
ShadowAlignment
Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models
Red-Teaming-Language-Models-with-Language-Models
A re-implementation of the "Red Teaming Language Models with Language Models" paper by Perez et al., 2022