HPC-AI Tech's repositories
ColossalAI
Making large AI models cheaper, faster, and more accessible
SwiftInfer
Efficient AI Inference & Serving
ColossalAI-Examples
Examples of training models with hybrid parallelism using ColossalAI
PaLM-colossalai
Scalable PaLM implementation in PyTorch
TensorNVMe
A Python library that transfers PyTorch tensors between CPU and NVMe storage
CachedEmbedding
A memory-efficient DLRM training solution using ColossalAI
SkyComputing
Sky Computing: Accelerating Geo-distributed Computing in Federated Learning
ColossalAI-Documentation
Documentation for Colossal-AI
Oh-My-Dockerfile
A collection of dockerfiles for various tasks
public_assets
Storing publicly available assets such as images, animations and texts
transformers
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
ColossalAI-Platform-CLI
CLI for ColossalAI Platform
mmdetection-examples
Train mmdetection models with ColossalAI.
CANN-Installer
This repository contains Huawei Ascend CANN installer files
Cloud-Platform-Docs
Documentation for our cloud platform
pytest-testmon
Selects tests affected by changed files. Executes the right tests first. Continuous test runner when used with pytest-watch.
TensorRT-LLM
TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT LLM also contains components to create Python and C++ runtimes that orchestrate the inference execution in a performant way.
TensorRT-Model-Optimizer
A unified library of state-of-the-art model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM or TensorRT to optimize inference speed.