mingwei-liu's starred repositories

openai-cookbook

Examples and guides for using the OpenAI API
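
For orientation, a minimal sketch of the kind of request the cookbook's examples build around, assuming the official `openai` Python SDK (v1+) and an `OPENAI_API_KEY` in the environment; the model name is illustrative only:

    # Minimal OpenAI API call (assumes: pip install openai, OPENAI_API_KEY set).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(response.choices[0].message.content)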

Prompt-Engineering-Guide

🐙 Guides, papers, lectures, notebooks, and resources for prompt engineering

dify

Dify is an open-source LLM app development platform. Its intuitive interface combines AI workflows, RAG pipelines, agent capabilities, model management, and observability features, letting you go quickly from prototype to production.

Language: TypeScript · License: NOASSERTION · Stargazers: 46990 · Issues: 349 · Issues: 4016

ChatGLM-6B

ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型

Language: Python · License: Apache-2.0 · Stargazers: 40462 · Issues: 394 · Issues: 1293
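
For reference, the released checkpoint is usually run through Hugging Face Transformers with `trust_remote_code=True`; a minimal sketch, assuming a CUDA GPU and the `THUDM/chatglm-6b` checkpoint (the prompt is illustrative):

    # Load ChatGLM-6B via Transformers (half precision on a single GPU).
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
    model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
    model = model.eval()

    # model.chat is the chat helper shipped with the checkpoint's remote code.
    response, history = model.chat(tokenizer, "Hello, please introduce yourself.", history=[])
    print(response)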

evals

Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.

Language: Python · License: NOASSERTION · Stargazers: 14749 · Issues: 262 · Issues: 207

open-llms

📋 A list of open LLMs available for commercial use.

Language: Jupyter Notebook · License: MIT · Stargazers: 9319 · Issues: 86 · Issues: 30

WizardLM

LLMs built upon Evol-Instruct: WizardLM, WizardCoder, and WizardMath

open_llama

OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
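
Since the weights are released in Transformers format, the checkpoint loads like any LLaMA model; a minimal generation sketch, assuming the `openlm-research/open_llama_7b` checkpoint on the Hugging Face Hub:

    # Greedy generation with OpenLLaMA 7B (assumes torch + transformers installed).
    import torch
    from transformers import LlamaForCausalLM, LlamaTokenizer

    model_path = "openlm-research/open_llama_7b"
    tokenizer = LlamaTokenizer.from_pretrained(model_path)
    model = LlamaForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, device_map="auto")

    prompt = "Q: What is the largest animal?\nA:"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0]))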

starcoder

Home of StarCoder: fine-tuning & inference!

Language: Python · License: Apache-2.0 · Stargazers: 7275 · Issues: 70 · Issues: 142
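
Inference follows the standard Transformers causal-LM path; a minimal completion sketch, assuming access to the gated `bigcode/starcoder` checkpoint on the Hugging Face Hub:

    # Code completion with StarCoder (requires accepting the model license on the Hub).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    checkpoint = "bigcode/starcoder"
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

    inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
    outputs = model.generate(inputs.input_ids, max_new_tokens=40)
    print(tokenizer.decode(outputs[0]))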

promptfoo

Test your prompts, agents, and RAGs. Red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude, Gemini, Llama, and more. Simple declarative configs with command line and CI/CD integration.

Language: TypeScript · License: MIT · Stargazers: 4340 · Issues: 20 · Issues: 631

Chinese-Vicuna

Chinese-Vicuna: a Chinese instruction-following LLaMA-based model; a low-resource Chinese LLaMA + LoRA recipe with a structure modeled on Alpaca

Language: C · License: Apache-2.0 · Stargazers: 4143 · Issues: 58 · Issues: 244

ChatGLM-Efficient-Tuning

Fine-tuning ChatGLM-6B with PEFT | efficient ChatGLM fine-tuning based on PEFT

Language: Python · License: Apache-2.0 · Stargazers: 3649 · Issues: 32 · Issues: 374
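
The core idea is wrapping ChatGLM with the `peft` library's adapters instead of full fine-tuning; a generic LoRA sketch (not the repo's own training script), assuming `peft`, `transformers`, and the ChatGLM checkpoint are available; `query_key_value` is the fused attention projection name in the ChatGLM implementation:

    # Attach LoRA adapters to ChatGLM-6B with PEFT (illustrative, not the repo's trainer).
    from peft import LoraConfig, TaskType, get_peft_model
    from transformers import AutoModel

    base = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half()
    lora = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=8, lora_alpha=32, lora_dropout=0.1,
        target_modules=["query_key_value"],  # ChatGLM's fused attention projection
    )
    model = get_peft_model(base, lora)
    model.print_trainable_parameters()  # only the LoRA weights remain trainable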

BIG-bench

Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models

Language: Python · License: Apache-2.0 · Stargazers: 2833 · Issues: 51 · Issues: 150

human-eval

Code for the paper "Evaluating Large Language Models Trained on Code"

Language: Python · License: MIT · Stargazers: 2338 · Issues: 130 · Issues: 36
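
The repo exposes a small Python API plus a CLI for scoring; a minimal sketch of the intended workflow, assuming the package is installed from this repo and `generate_completion` stands in for your own model call:

    # Produce samples for HumanEval, then score them with the bundled CLI:
    #   evaluate_functional_correctness samples.jsonl
    from human_eval.data import read_problems, write_jsonl

    def generate_completion(prompt: str) -> str:
        """Placeholder for a real model call; should return only the function body."""
        return "    return 0\n"

    problems = read_problems()  # task_id -> problem dict with a "prompt" field
    samples = [
        dict(task_id=task_id, completion=generate_completion(problems[task_id]["prompt"]))
        for task_id in problems
    ]
    write_jsonl("samples.jsonl", samples)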

click-prompt

ClickPrompt streamlines prompt design: easily view, share, and run your prompts with just one click.

Language: TypeScript · License: MIT · Stargazers: 2288 · Issues: 24 · Issues: 38

GPTeacher

A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer

Language: Python · License: MIT · Stargazers: 1609 · Issues: 46 · Issues: 5

ChatReviewer

ChatReviewer: uses ChatGPT to analyze a paper's strengths and weaknesses and suggest improvements

Language: Python · License: NOASSERTION · Stargazers: 1275 · Issues: 3 · Issues: 27

MOSS-RLHF

MOSS-RLHF

Language: Python · License: Apache-2.0 · Stargazers: 1274 · Issues: 34 · Issues: 52

LOMO

LOMO: LOw-Memory Optimization

Language: Python · License: MIT · Stargazers: 975 · Issues: 13 · Issues: 70

PromptInject

PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to adversarial prompt attacks. 🏆 Best Paper Awards @ NeurIPS ML Safety Workshop 2022

Language: Python · License: MIT · Stargazers: 297 · Issues: 10 · Issues: 2

codegeex-vscode-extension

VS Code extension for CodeGeeX

Language: TypeScript · License: Apache-2.0 · Stargazers: 268 · Issues: 1 · Issues: 11

ClassEval

ClassEval: a benchmark for class-level code generation.

Language: Python · License: MIT · Stargazers: 123 · Issues: 6 · Issues: 16

RLTF

RLTF (Reinforcement Learning from Unit Test Feedback); accepted by Transactions on Machine Learning Research (TMLR)

Language: Python · License: BSD-3-Clause · Stargazers: 115 · Issues: 2 · Issues: 5

SelfCheck

Code for the paper "SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning"

tracking-arxiv

WeChat official account 机器感知 (Machine Perception) | Tracking the latest arXiv papers

oasst-automatic-model-eval

Moved to https://github.com/tju01/ilm-eval

Language: HTML · Stargazers: 1 · Issues: 1 · Issues: 0