shizhediao's repositories
ChatGPTPapers
Must-read papers, related blogs, and API tools on pre-training and tuning methods for ChatGPT.
active-prompt
Source code for the paper "Active Prompting with Chain-of-Thought for Large Language Models"
Black-Box-Prompt-Learning
Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models"
automate-cot
Source code for the paper "Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data"
HashTation
Source code for the paper "Hashtag-Guided Low-Resource Tweet Classification"
Awesome-Chinese-LLM
A curated collection of open-source Chinese large language models, focusing on smaller models that can be privately deployed at low training cost, covering base models, domain-specific fine-tuning and applications, datasets, and tutorials.
Chain-of-ThoughtsPapers
A trend that started with "Chain of Thought Prompting Elicits Reasoning in Large Language Models".
directional-preference-alignment
Directional Preference Alignment
LLMs-In-China
Large language models in China.
openai-cookbook
Examples and guides for using the OpenAI API
PromptPapers
Must-read papers on prompt-based tuning for pre-trained language models.
awesome-ChatGPT-resource-zh
A curated list of OpenAI [ChatGPT](https://chat.openai.com) resources, tracking the latest resources and adding related Chinese work.
awesome-RLHF
A curated list of reinforcement learning with human feedback resources (continually updated)
bolei_awesome_posters
CVPR and NeurIPS poster examples and templates. May we have in-person poster sessions soon!
chatgpt-clone
A ChatGPT clone with DALL·E image generation, using OpenAI's text-davinci-003 and image generation models.
ChatGPT-Next-Web
A well-designed cross-platform ChatGPT UI (Web / PWA / Linux / Win / MacOS). Get your own cross-platform ChatGPT app with one click.
Instruction-Tuning-Papers
A reading list for instruction tuning. A trend that started with Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022).
keyphrase-generation-multigrain-attention
Keyphrase Generation with Cross-Document Attention
MiniGPT-4
MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models
natural-instructions
Expanding natural instructions
Paper-Picture-Writing-Code
LaTeX code templates for drawing figures in academic papers.
X-VLM
X-VLM: Multi-Grained Vision Language Pre-Training