GUO-QING JIANG (Ageliss)

Company: Worked at Kuaishou, Baidu, Meituan

Location: Beijing

Home Page: https://ageliss.github.io/gqjiang/

GUO-QING JIANG's repositories

Auto-GPT

An experimental open-source attempt to make GPT-4 fully autonomous.

Language: Python | License: MIT | Stargazers: 0 | Issues: 0

awesome-chatgpt-prompts

A curated collection of ChatGPT prompts for using ChatGPT more effectively.

Language: HTML | License: CC0-1.0 | Stargazers: 0 | Issues: 0

Awesome-Deep-Neural-Network-Compression

Summaries and code for deep neural network quantization and compression.

Language: Python | Stargazers: 0 | Issues: 0

BELLE

BELLE: Be Everyone's Large Language model Engine (an open-source Chinese dialogue large language model).

License: Apache-2.0 | Stargazers: 0 | Issues: 0

ChatGLM-6B

ChatGLM-6B: an open-source bilingual dialogue language model.

License: Apache-2.0 | Stargazers: 0 | Issues: 0
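
As a sketch of how this dialogue model is typically driven, the ChatGLM-6B README loads the checkpoint through Hugging Face transformers; the model id and the chat call below follow that documented usage, and a CUDA GPU is assumed.

```python
from transformers import AutoTokenizer, AutoModel

# ChatGLM-6B ships custom modeling code on the Hub,
# hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
model = model.eval()

# Multi-turn chat: the model returns a reply plus the updated history.
response, history = model.chat(tokenizer, "Hello, what can you do?", history=[])
print(response)
```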

Chinese-LLaMA-Alpaca

Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment.

License: Apache-2.0 | Stargazers: 0 | Issues: 0

CLIP

CLIP (Contrastive Language-Image Pretraining): predicts the most relevant text snippet for a given image.

License: MIT | Stargazers: 0 | Issues: 0
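
A minimal zero-shot matching sketch, following the usage example in the openai/CLIP README; the image path and candidate captions are placeholders.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Encode one image and a set of candidate captions.
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device)

with torch.no_grad():
    # Similarity logits between the image and each caption.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(probs)  # highest probability = most relevant caption
```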

ColossalAI

Making big AI models cheaper, easier, and more scalable

License: Apache-2.0 | Stargazers: 0 | Issues: 0

DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

License: MIT | Stargazers: 0 | Issues: 0
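
To illustrate the library's entry point: deepspeed.initialize wraps a plain PyTorch model into a distributed engine that handles mixed precision, ZeRO sharding, and gradient synchronization. The toy model, config values, and ZeRO stage below are illustrative assumptions, and a real run is normally launched via the deepspeed CLI so distributed ranks are set up.

```python
import torch
import deepspeed

model = torch.nn.Linear(512, 512)  # stand-in for a real network

ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # shard optimizer state + gradients
}

# Returns an engine that replaces the usual model/optimizer pair.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

x = torch.randn(8, 512).to(engine.device).half()
loss = engine(x).float().pow(2).mean()
engine.backward(loss)   # replaces loss.backward()
engine.step()           # replaces optimizer.step()
```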

DeepSpeedExamples

Example models using DeepSpeed

License: Apache-2.0 | Stargazers: 0 | Issues: 0

fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

License: MIT | Stargazers: 0 | Issues: 0

FastChat

An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and FastChat-T5.

License: Apache-2.0 | Stargazers: 0 | Issues: 0

langchain

⚡ Building applications with LLMs through composability ⚡

License: MIT | Stargazers: 0 | Issues: 0
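
A small composition sketch in the classic LLMChain style; LangChain's API has shifted considerably across versions, so treat the exact imports as version-dependent, and note that OPENAI_API_KEY is assumed to be set in the environment.

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Compose a prompt template with an LLM into a reusable chain.
prompt = PromptTemplate(
    input_variables=["product"],
    template="Suggest one name for a company that makes {product}.",
)
chain = LLMChain(llm=OpenAI(temperature=0.7), prompt=prompt)

print(chain.run(product="GPU inference servers"))
```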

LAVIS

LAVIS - A One-stop Library for Language-Vision Intelligence

License: BSD-3-Clause | Stargazers: 0 | Issues: 0

lightseq

LightSeq: A High Performance Library for Sequence Processing and Generation

License: NOASSERTION | Stargazers: 0 | Issues: 0

llama

Inference code for LLaMA models

License: GPL-3.0 | Stargazers: 0 | Issues: 0

llama.cpp

LLM inference in C/C++

License: MIT | Stargazers: 0 | Issues: 0
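
The repo itself is C/C++; as a sketch of driving the same engine from Python, the community llama-cpp-python bindings expose a one-call completion API. The GGUF model path below is a placeholder.

```python
from llama_cpp import Llama

# Load a quantized GGUF checkpoint (path is a placeholder).
llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048)

# Single completion call; generation stops at the stop sequence.
out = llm("Q: Name the planets in the solar system. A:",
          max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```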

Medusa

Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads

License: Apache-2.0 | Stargazers: 0 | Issues: 0

Megatron-DeepSpeed

Ongoing research training transformer language models at scale, including: BERT & GPT-2

License: NOASSERTION | Stargazers: 0 | Issues: 0

MS-AMP

Microsoft Automatic Mixed Precision Library

Language: Python | License: MIT | Stargazers: 0 | Issues: 0

NeMo

NeMo: a framework for generative AI

License: Apache-2.0 | Stargazers: 0 | Issues: 0

openai-cookbook

Examples and guides for using the OpenAI API

License: MIT | Stargazers: 0 | Issues: 0

peft

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

License: Apache-2.0 | Stargazers: 0 | Issues: 0
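
A minimal LoRA sketch with PEFT: wrap a base model so that only small low-rank adapter matrices train while the original weights stay frozen. GPT-2 and the c_attn target module are illustrative choices, not the only option.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA: inject low-rank adapters into the attention projection.
config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["c_attn"], task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only a small fraction is trainable
```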

rtp-llm

RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications.

License: Apache-2.0 | Stargazers: 0 | Issues: 0

stanford_alpaca

Code and documentation to train Stanford's Alpaca models, and generate the data.

License: Apache-2.0 | Stargazers: 0 | Issues: 0

stylegan-xl

[SIGGRAPH'22] StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets

Language: Python | License: MIT | Stargazers: 0 | Issues: 0

stylegan2-ada-pytorch

StyleGAN2-ADA - Official PyTorch implementation

Language: Python | License: NOASSERTION | Stargazers: 0 | Issues: 0

TensorRT-LLM

TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.

License: Apache-2.0 | Stargazers: 0 | Issues: 0
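
Recent releases ship a high-level Python LLM API that hides engine building behind a single object; the sketch below assumes that API plus a Hugging Face model id, so the exact import surface and attribute names may differ by version.

```python
from tensorrt_llm import LLM, SamplingParams

# Builds (or loads) a TensorRT engine for the given checkpoint.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

params = SamplingParams(temperature=0.8, max_tokens=64)
for output in llm.generate(["What is TensorRT?"], params):
    print(output.outputs[0].text)
```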

TransformerEngine

A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.

License: Apache-2.0 | Stargazers: 0 | Issues: 0
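
Following the quickstart pattern from the Transformer Engine docs: run a TE layer's forward pass inside fp8_autocast with a delayed-scaling recipe. A Hopper or Ada GPU is assumed, and the layer dimensions are arbitrary.

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# A drop-in replacement for torch.nn.Linear with FP8 support.
layer = te.Linear(768, 3072, bias=True)
inp = torch.randn(128, 768, device="cuda")

fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)

# Matmuls run in FP8 inside the autocast; weights stay higher precision.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = layer(inp)

out.sum().backward()
```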

trlx

A repo for distributed training of language models with Reinforcement Learning from Human Feedback (RLHF).

License: MIT | Stargazers: 0 | Issues: 0