Donglin Zhuang's repositories
web-server
A web server based on Boost.Asio
MIT6.824-Distributed-System
MIT 6.824 Distributed Systems lab code
attention-is-all-you-need-pytorch
A PyTorch implementation of the Transformer model in "Attention is All You Need".
awesome-tensor-compilers
A list of awesome compiler projects and papers for tensor computation and deep learning.
beautiful-hexo
A Hexo theme ported from beautiful-jekyll.
Capsule-Network-Tutorial
An easy-to-follow PyTorch Capsule Network tutorial
d2l-tvm
Dive into Deep Learning Compiler
deepcaps
Official Implementation of "DeepCaps: Going Deeper with Capsule Networks" paper (CVPR 2019).
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
FlexGen
Running large language models on a single GPU for throughput-oriented scenarios.
google-research
Google Research
Kaleidoscope
Compiler based on LLVM
loss-landscape
Code for visualizing the loss landscape of neural nets
models
Models and examples built with TensorFlow
pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
resnet-in-tensorflow
A re-implementation of Kaiming He's deep residual networks in TensorFlow; can be trained on CIFAR-10.
sglang
SGLang is a structured generation language designed for large language models (LLMs). It makes your interaction with models faster and more controllable.
sputnik
A library of GPU kernels for sparse matrix operations.
tensor2tensor
Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
tensorflow
An Open Source Machine Learning Framework for Everyone
transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
trax
Trax — Deep Learning with Clear Code and Speed
tvm
Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators