Dino Chen's repositories
commitlint
📓 Lint commit messages
dc3671.github.io
Blog.
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
DeepSpeed-MII
MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.
eslint-config-alloy
AlloyTeam ESLint rules
inference
Reference implementations of MLPerf™ inference benchmarks
intel-extension-for-transformers
Extends Hugging Face transformers APIs for Transformer-based models and improves the productivity of inference deployment. With extremely compressed models, the toolkit can greatly improve inference efficiency on Intel platforms.
Megatron-DeepSpeed
Ongoing research training transformer language models at scale, including: BERT & GPT-2
Models
Various mainstream deep learning models implemented with MegEngine
os_exercises
Exercises for Tsinghua University's open OS course
transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
transformers-bloom-inference
Fast Inference Solutions for BLOOM
ucore_lab
Lab Codes for MOOC OS course in Tsinghua University.
vim-autoformat
Provide easy code formatting in Vim by integrating existing code formatters.
vux2
A full-featured Webpack + vue-loader setup with hot reload, linting, testing & css extraction.