YuanEric88's starred repositories
llama-recipes
Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default & custom datasets for applications such as summarization and Q&A. Also supports a number of candidate inference solutions, such as HF TGI and vLLM, for local or cloud deployment. Includes demo apps showcasing Meta Llama3 for WhatsApp & Messenger.
awesome-RLHF
A curated list of reinforcement learning with human feedback resources (continually updated)
streaming-llm
[ICLR 2024] Efficient Streaming Language Models with Attention Sinks
awesome-instruction-datasets
A collection of awesome prompt and instruction datasets for training ChatLLMs such as ChatGPT; gathers a wide variety of instruction datasets used to train ChatLLM models.
AlignBench
A multi-dimensional Chinese alignment benchmark for large language models (ACL 2024)
nougat-latex-ocr
Codebase for fine-tuning / evaluating nougat-based image2latex generation models
Claude-API
This project provides an unofficial API for Claude AI, allowing users to access and interact with Claude AI.
awesome-langchain
😎 Awesome list of tools and projects built with the LangChain framework
LLM-eval-survey
The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models".
stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
alpaca-lora
Instruct-tune LLaMA on consumer hardware
EventExtraction-RAAT
Code for NAACL 2022 paper (Main Track) "RAAT: Relation-Augmented Attention Transformer for Relation Modeling in Document-Level Event Extraction"
Summarization-Papers
Summarization Papers
universal-pos-tags
Automatically exported from code.google.com/p/universal-pos-tags
ccf-deadlines
⏰ Collaboratively track deadlines of conferences recommended by CCF (website, Python CLI, WeChat applet). If you find it useful, please star this project, thanks~
transferlearning
Transfer learning / domain adaptation / domain generalization / multi-task learning, etc. Papers, code, datasets, applications, tutorials.
mre-in-one-pass
Implementation for "Extracting Multiple Relations in One-Pass with Pre-Trained Transformers"