Ma-Dan's repositories

Llama2-CoreML

Llama2 for iOS implemented using CoreML.

Language: Objective-C++ · License: MIT · Stargazers: 6 · Issues: 0

alpaca-lora

Instruct-tune LLaMA on consumer hardware

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 0 · Issues: 0
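
A minimal sketch of LoRA instruct-tuning, assuming the Hugging Face transformers and peft libraries that alpaca-lora builds on; the checkpoint name and hyperparameters below are illustrative, not the repository's exact settings.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "decapoda-research/llama-7b-hf"     # illustrative LLaMA checkpoint name
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(base)

lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor for the update
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # only the small LoRA adapters are trained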

Auto-GPT

An experimental open-source attempt to make GPT-4 fully autonomous.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

ChatGLM-6B

ChatGLM-6B: An open bilingual dialogue language model

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
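
A minimal usage sketch in the spirit of the upstream ChatGLM-6B README; the model id and the half-precision/GPU settings are the commonly documented defaults and should be treated as assumptions.

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
model = model.eval()

# model.chat returns the reply plus the running dialogue history
response, history = model.chat(tokenizer, "你好", history=[])
print(response)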

ChatGLM-Tuning

An affordable ChatGPT implementation based on ChatGLM-6B + LoRA

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

ChatRWKV

ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model, and it is open source.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

Chinese-alpaca-lora

骆驼 (Luotuo): A Chinese instruction-fine-tuned LLaMA. Developed by 陈启源 @ 华中师范大学, 李鲁鲁 @ 商汤科技, and 冷子昂 @ 商汤科技

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 0 · Issues: 0

Chinese-LLaMA-Alpaca

Chinese LLaMA & Alpaca large language models with local CPU/GPU deployment (Chinese LLaMA & Alpaca LLMs)

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

ControlVideo

Official PyTorch implementation of "ControlVideo: Training-free Controllable Text-to-Video Generation"

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

DB-GPT-Hub

A repository that contains models, datasets, and fine-tuning techniques for DB-GPT, with the purpose of enhancing model performance, especially in Text-to-SQL.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
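
A minimal training-loop sketch showing how DeepSpeed wraps a PyTorch model; the toy model, batch size, and ZeRO stage are illustrative assumptions, and a real run is typically launched with the deepspeed launcher rather than plain python.

import torch
import deepspeed

model = torch.nn.Linear(128, 10)           # toy model standing in for a real network

ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
    "zero_optimization": {"stage": 2},      # partition optimizer states and gradients
}

# deepspeed.initialize returns an engine that owns backward() and step()
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

for _ in range(10):
    x = torch.randn(32, 128).half().cuda()
    loss = engine(x).float().pow(2).mean()  # dummy loss for illustration
    engine.backward(loss)
    engine.step()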

DeepSpeedExamples

Example models using DeepSpeed

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
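
A minimal text-to-image sketch with the diffusers pipeline API; the checkpoint name is illustrative, and the weights are pulled from the Hugging Face Hub on first use.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")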

InstructGLM

Instruction learning and instruction data for ChatGLM-6B (Instruct)

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

kaldifeat

Kaldi-compatible online & offline feature extraction with PyTorch, supporting CUDA, batch processing, chunk processing, and autograd. Provides C++ & Python APIs.

Language: C++ · License: NOASSERTION · Stargazers: 0 · Issues: 0

llama

Inference code for LLaMA models

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 0

llama-recipes

Examples and recipes for the Llama 2 model

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 0

LLaSM

The first open-source, commercially usable dialogue model that supports bilingual (Chinese and English) speech-text multimodal conversation. Convenient speech input substantially improves the experience of using large models that take text as input, while avoiding the cumbersome pipeline of ASR-based solutions and the errors they may introduce.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

MatmulTutorial

An easy-to-understand TensorOp matmul tutorial

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
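
A conceptual tiling sketch in NumPy (not the tutorial's CUDA code): TensorOp/tensor-core matmul kernels decompose C = A @ B into small tiles that fit in fast on-chip memory, and the tile size below is illustrative.

import numpy as np

def tiled_matmul(A, B, tile=32):
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, tile):
        for j in range(0, N, tile):
            for k in range(0, K, tile):
                # accumulate one C tile from matching tiles of A and B
                C[i:i+tile, j:j+tile] += A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
    return C

A = np.random.rand(128, 96).astype(np.float32)
B = np.random.rand(96, 64).astype(np.float32)
assert np.allclose(tiled_matmul(A, B), A @ B, atol=1e-4)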

MotionPlanning

Motion planning algorithms commonly used on autonomous vehicles. (path planning + path tracking)

Stargazers: 0 · Issues: 0

multi_agent_path_planning

Python implementations of a collection of multi-robot path-planning algorithms.

License: MIT · Stargazers: 0 · Issues: 0
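
A generic single-agent A* grid search, a common low-level building block for multi-robot planners such as conflict-based search; this sketch is not the repository's code, and the grid and heuristic are illustrative.

import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col) tuples."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]                  # (f, g, node, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:          # already settled with a better cost
            continue
        came_from[node] = parent
        if node == goal:               # reconstruct the path back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                if g + 1 < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + 1
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, node))
    return None                        # no path exists

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))     # detours around the blocked middle row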

nebullvm

Plug-and-play modules to optimize the performance of your AI systems 🚀

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

Qbot

[🔥 updating ...] Qbot, an AI-driven automated quantitative trading bot: an AI-oriented quantitative investment platform that aims to realize the potential of AI technologies in quantitative investment. 📃 Online docs: https://ufund-me.github.io/Qbot ✨ qbot-mini: https://github.com/Charmve/iQuant

Language: Jupyter Notebook · License: MIT · Stargazers: 0 · Issues: 0

RWKV-LM

RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

stanford_alpaca

Code and documentation to train Stanford's Alpaca models, and generate the data.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
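
The instruction-following prompt format widely used for Alpaca-style data; the wording below follows the commonly cited template and should be treated as approximate rather than the project's verbatim text.

ALPACA_PROMPT = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

print(ALPACA_PROMPT.format(instruction="Translate to French.", input="Good morning."))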

zero_nlp

Chinese NLP applications (data, models, training, inference)

License: MIT · Stargazers: 0 · Issues: 0

Zhongjing

A Chinese medical ChatGPT based on LLaMA, trained on a large-scale pretraining corpus and a multi-turn dialogue dataset.

Language: Shell · License: Apache-2.0 · Stargazers: 0 · Issues: 0