Enming Yuan's starred repositories
awesome-chatgpt-prompts
A curated collection of ChatGPT prompts for getting better results from ChatGPT.
gpt_academic
Provides a practical interactive interface for GPT/GLM and other large language models, with special optimizations for reading, polishing, and writing academic papers. Modular design with custom shortcut buttons and function plugins; project analysis and self-translation for Python, C++, and other codebases; PDF/LaTeX paper translation and summarization; parallel queries to multiple LLMs; support for local models such as chatglm3. Integrates Tongyi Qianwen, deepseekcoder, iFlytek Spark, ERNIE Bot, llama2, rwkv, claude2, moss, and more.
ColossalAI
Making large AI models cheaper, faster and more accessible
stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
tuning_playbook
A playbook for systematically maximizing the performance of deep learning models.
RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.
bob-plugin-openai-translator
A Bob plugin for text translation, text polishing, and grammar correction based on the ChatGPT API. Let's welcome a new era that needs no Tower of Babel! Licensed under CC BY-NC-SA 4.0.
direct-preference-optimization
Reference implementation for DPO (Direct Preference Optimization)
MEGABYTE-pytorch
Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch
Reference-arithmetic-coding
Clear implementation of arithmetic coding for educational purposes in Java, Python, C++.
CoLT5-attention
Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch
simple-hierarchical-transformer
Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT