neos's repositories
Chat-Haruhi-Suzumiya
Chat凉宫春日: an open-source role-playing chatbot, by Cheng Li, Ziang Leng, and others.
cpp_interview
A summary of C++ backend server development interview experiences and common interview questions (with both depth and breadth, unlike write-ups that cover only concepts).
Cradle
The Cradle framework is a first attempt at General Computer Control (GCC). Cradle enables agents to tackle any computer task through strong reasoning abilities, self-improvement, and skill curation, in a standardized general environment with minimal requirements.
DistServe
Disaggregated serving system for Large Language Models (LLMs).
FunClip
An open-source, accurate, and easy-to-use video clipping tool with integrated LLM-based AI clipping.
GPTSwarm
🐝 GPTSwarm: LLM agents as (Optimizable) Graphs
graph-of-thoughts
Official Implementation of "Graph of Thoughts: Solving Elaborate Problems with Large Language Models"
Inpaint-Anything
Inpaint anything using Segment Anything and inpainting models.
LazyLLM
The easiest and laziest way to build multi-agent LLM applications.
lectures
Material for CUDA MODE lectures
llm-on-ray
Pretrain, fine-tune, and serve LLMs on Intel platforms with Ray
llm_interview_note
Notes on the knowledge and interview questions relevant to large language model (LLM) algorithm/application engineers.
mllm
Fast Multimodal LLM on Mobile Devices
mpv-upscale-2x_animejanai
Real-time anime upscaling to 4k in mpv with Real-ESRGAN compact models
PromptIR
PromptIR: Prompting for All-in-One Blind Image Restoration [NeurIPS 2023]
ProPainter
[ICCV 2023] ProPainter: Improving Propagation and Transformer for Video Inpainting
qllm-eval
Code repository for "Evaluating Quantized Large Language Models"
shezhangbujianle
Using the world's largest monolithic Chinese NLP model, we built an AI that can play "script murder" (jubensha) games with humans…
swiftLLM
A tiny yet powerful LLM inference system tailored for research purposes
tiny-flash-attention
A FlashAttention tutorial written in Python, Triton, CUDA, and CUTLASS
tiny-universe
"A White-Box Guide to Building Large Models": a fully hand-built Tiny-Universe.
video-subtitle-remover
AI-based tool for removing hard-coded subtitles and text watermarks from images and videos, producing cleaned files at lossless resolution. Runs entirely locally; no third-party API required.
vidur
A large-scale simulation framework for LLM inference