ldwang's repositories
ChatDev
Create customized software from a natural-language idea (through LLM-powered multi-agent collaboration)
CoachLM
Code and data for CoachLM, an automatic instruction-revision approach for LLM instruction tuning.
danswer
Ask Questions in natural language and get Answers backed by private sources. Connects to tools like Slack, GitHub, Confluence, etc.
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
FastChat
The release repo for "Vicuna: An Open Chatbot Impressing GPT-4"
Flowise
Drag & drop UI to build your customized LLM flow
GeneralAgent
A simple, general, customizable Agent Framework
gpt-fast
Simple and efficient pytorch-native transformer text generation in <1000 LOC of python.
haystack
:mag: LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
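The component-to-pipeline pattern this description refers to can be illustrated with a generic toy sketch (plain Python, not the Haystack API; all names below are hypothetical):

```python
# Toy illustration of chaining processing components into a pipeline,
# in the spirit of frameworks like haystack. NOT the Haystack API.

class Pipeline:
    """Runs a list of components in order, feeding each one's output to the next."""
    def __init__(self, components):
        self.components = components

    def run(self, data):
        for component in self.components:
            data = component(data)
        return data

# Hypothetical components: a "converter" and a naive keyword "retriever".
def to_lowercase(text):
    return text.lower()

def split_sentences(text):
    return [s.strip() for s in text.split(".") if s.strip()]

def retrieve(sentences, query="pipeline"):
    # Naive keyword retrieval: keep sentences mentioning the query term.
    return [s for s in sentences if query in s]

pipeline = Pipeline([to_lowercase, split_sentences, retrieve])
print(pipeline.run("Pipelines chain components. Models differ. A pipeline is simple."))
```

In a real framework the components would be retrievers, rankers, and generators connected into a graph, but the control flow is the same idea.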
haystack-tutorials
Here you can find all the Tutorials for Haystack 📓
Langchain-Chatchat
Langchain-Chatchat (formerly langchain-ChatGLM): a local knowledge-base QA app built with Langchain and LLMs such as ChatGLM
langflow
⛓️ Langflow is a UI for LangChain, designed with react-flow to provide an effortless way to experiment and prototype flows.
LangGPT
LangGPT: Empowering everyone to become a prompt expert! 🚀 Structured prompts, the language of GPT
large-sequence-modeling
Transformers with Arbitrarily Large Context, No Approximations
leetcode
🔥 LeetCode solutions in multiple programming languages, plus solutions for Coding Interviews ("剑指 Offer", 2nd edition) and Cracking the Coding Interview (6th edition)
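As a flavor of what such a solutions collection contains, here is a standard hash-map solution to LeetCode problem 1 (Two Sum) in O(n) time (illustrative only, not taken from the repo):

```python
def two_sum(nums, target):
    """Return indices of the two numbers that add up to target (LeetCode #1)."""
    seen = {}  # value -> index where we first saw it
    for i, n in enumerate(nums):
        complement = target - n
        if complement in seen:
            return [seen[complement], i]
        seen[n] = i
    return []  # no pair found

print(two_sum([2, 7, 11, 15], 9))  # → [0, 1]
```

The hash map trades O(n) extra space for a single pass, versus the O(n²) brute-force pairwise check.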
LightSeq
Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers
LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning: LLaVA (Large Language-and-Vision Assistant) built towards GPT-4V level capabilities.
llm_finetuning
Large language model fine-tuning for BLOOM, OPT, GPT, GPT-2, LLaMA, LLaMA-2, CPM-Ant, and more
long-llms-learning
A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks
LongBench
LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding
MetaGPT
🌟 The Multi-Agent Framework: given a one-line requirement, returns a PRD, design, tasks, and repo
NeMo
NeMo: a toolkit for conversational AI
OpenMoE
A family of open-source Mixture-of-Experts (MoE) large language models
Qwen-Agent
Agent framework and applications built upon Qwen, featuring Code Interpreter and Chrome browser extension.
Rethinking-attention
My implementation of the original transformer model (Vaswani et al.). A playground.py file is additionally included for visualizing otherwise hard-to-grasp concepts. Pretrained IWSLT models are currently included.
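The core of the Vaswani et al. model is scaled dot-product attention; a dependency-free pure-Python sketch for a single query over a few key/value pairs (toy dimensions, illustrative only):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector:
    scores = q·k / sqrt(d_k); weights = softmax(scores); output = Σ w_i * v_i."""
    d_k = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d_k) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[j] for w, v in zip(weights, values)) for j in range(dim)]

# One query attending over two key/value pairs: it aligns with the first key,
# so the output leans toward the first value.
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[10.0, 0.0], [0.0, 10.0]])
print(out)
```

Real implementations batch this over matrices (Q, K, V) on a GPU, but the per-query arithmetic is exactly this.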
whisper
Robust Speech Recognition via Large-Scale Weak Supervision
XAgent
An Autonomous LLM Agent for Complex Task Solving