Ming Zhou (KornbergFresnel)



Company: Shanghai AI Lab

Location: Shanghai, China

Home Page: mingzak.com

Twitter: @mzhou_cs



Organizations
APEXLAB
apexrl
sjtu-marl

Ming Zhou's starred repositories

langchain

🦜🔗 Build context-aware reasoning applications

Language: Python | License: MIT | Stargazers: 86997 | Issues: 666 | Issues: 6933

fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Language: Python | License: MIT | Stargazers: 29574 | Issues: 425 | Issues: 4151

Langchain-Chatchat

Langchain-Chatchat (formerly Langchain-ChatGLM): a local knowledge-base QA application built with Langchain and language models such as ChatGLM.

Language: Python | License: Apache-2.0 | Stargazers: 28832 | Issues: 268 | Issues: 3299

YesPlayMusic

A polished third-party NetEase Cloud Music player, supporting Windows / macOS / Linux :electron:

chatbot-ui

AI chat for every model.

Language: TypeScript | License: MIT | Stargazers: 26986 | Issues: 242 | Issues: 926

MiniGPT-4

Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)

Language: Python | License: BSD-3-Clause | Stargazers: 25063 | Issues: 219 | Issues: 448

LLaMA-Factory

Unify Efficient Fine-Tuning of 100+ LLMs

Language: Python | License: Apache-2.0 | Stargazers: 24099 | Issues: 165 | Issues: 3834

LazyVim

Neovim config for the lazy

Language: Lua | License: Apache-2.0 | Stargazers: 14565 | Issues: 58 | Issues: 1008

arrow

Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing

Language: C++ | License: Apache-2.0 | Stargazers: 13767 | Issues: 351 | Issues: 24216

RWKV-LM

RWKV is an RNN with transformer-level LLM performance that can be trained directly like a GPT (parallelizable). It combines the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.

Language: Python | License: Apache-2.0 | Stargazers: 11854 | Issues: 136 | Issues: 195

LoRA

Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"

Language: Python | License: MIT | Stargazers: 9510 | Issues: 64 | Issues: 102
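The low-rank update at the heart of LoRA can be sketched in a few lines of NumPy. This is an illustrative sketch of the idea only, not loralib's API; the dimensions, rank, and alpha scaling below are assumed values chosen for the example.

```python
import numpy as np

# LoRA freezes the pretrained weight W and learns a low-rank update
# delta_W = B @ A with rank r << min(d_out, d_in).
d_out, d_in, r = 64, 64, 4  # illustrative dimensions
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init -> delta_W starts at 0

x = rng.standard_normal(d_in)

# Forward pass: base path plus the low-rank adapter path, scaled by alpha / r.
alpha = 8.0
y = W @ x + (alpha / r) * (B @ (A @ x))

# Because B is zero at initialization, the adapted model matches the base model.
assert np.allclose(y, W @ x)

# Trainable parameter count drops from d_out * d_in to r * (d_out + d_in).
full, lora = d_out * d_in, r * (d_out + d_in)
print(full, lora)  # 4096 512
```

At inference time the update can be merged back (W + (alpha / r) * B @ A), so the adapter adds no latency, which is one of the paper's key selling points.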

trl

Train transformer language models with reinforcement learning.

Language: Python | License: Apache-2.0 | Stargazers: 8482 | Issues: 78 | Issues: 942

PowerInfer

High-speed Large Language Model Serving on PCs with Consumer-grade GPUs

Language: C++ | License: MIT | Stargazers: 7082 | Issues: 75 | Issues: 135

DeepSpeedExamples

Example models using DeepSpeed

Language: Python | License: Apache-2.0 | Stargazers: 5780 | Issues: 75 | Issues: 522

awesome-chatgpt-api

Curated list of apps and tools that not only use the new ChatGPT API, but also allow users to configure their own API keys, enabling free and on-demand usage of their own quota.

InternLM

Official release of the InternLM2 7B and 20B base and chat models, with 200K context support.

Language: Python | License: Apache-2.0 | Stargazers: 5436 | Issues: 49 | Issues: 291

ToolBench

[ICLR'24 spotlight] An open platform for training, serving, and evaluating large language models for tool learning.

Language: Python | License: Apache-2.0 | Stargazers: 4533 | Issues: 50 | Issues: 264

trlx

A repo for distributed training of language models with Reinforcement Learning from Human Feedback (RLHF)

Language: Python | License: MIT | Stargazers: 4368 | Issues: 49 | Issues: 284

open_flamingo

An open-source framework for training large multimodal models.

Language: Python | License: MIT | Stargazers: 3518 | Issues: 47 | Issues: 170

xtuner

An efficient, flexible, and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)

Language: Python | License: Apache-2.0 | Stargazers: 3007 | Issues: 30 | Issues: 384

torchscale

Foundation Architecture for (M)LLMs

Language: Python | License: MIT | Stargazers: 2952 | Issues: 46 | Issues: 75

awesome-RLHF

A curated list of reinforcement learning with human feedback resources (continually updated)

AgentBench

A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)

Language: Python | License: Apache-2.0 | Stargazers: 1939 | Issues: 29 | Issues: 122

MetaTransformer

Meta-Transformer for Unified Multimodal Learning

Language: Python | License: Apache-2.0 | Stargazers: 1459 | Issues: 22 | Issues: 65

RetNet

An implementation of "Retentive Network: A Successor to Transformer for Large Language Models"

Language: Python | License: MIT | Stargazers: 1135 | Issues: 13 | Issues: 26

orbit

Unified framework for robot learning built on NVIDIA Isaac Sim

Language: Python | License: NOASSERTION | Stargazers: 898 | Issues: 21 | Issues: 317

iGibson

A simulation environment for training robots in large, realistic, interactive scenes

Language: Python | License: MIT | Stargazers: 617 | Issues: 40 | Issues: 330

octo

Octo is a transformer-based robot policy trained on a diverse mix of 800k robot trajectories.

Language: Python | License: MIT | Stargazers: 563 | Issues: 16 | Issues: 76

gear

A distributed GPU-centric experience replay system for large AI models.

Language: C++ | Stargazers: 12 | Issues: 0 | Issues: 0