iLeGend's repositories
CINN
Compiler Infrastructure for Neural Networks
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
course-1
The Hugging Face course on Transformers
Effective-Modern-Cpp
Reading notes
HDRUNet
CVPR2021 Workshop - HDRUNet: Single Image HDR Reconstruction with Denoising and Dequantization.
intel-extension-for-transformers
Extending Hugging Face transformers APIs for Transformer-based models and improving the productivity of inference deployment. With extremely compressed models, the toolkit can greatly improve inference efficiency on Intel platforms.
Megatron-DeepSpeed
Ongoing research training transformer language models at scale, including: BERT & GPT-2
Model-References
TensorFlow and PyTorch Reference models for Gaudi(R)
neural-style
Neural style in TensorFlow! :art:
oneAPI-samples
Samples for Intel oneAPI toolkits
openvino
OpenVINO™ Toolkit repository
Paddle_docs
Documentation for PaddlePaddle
PaddleCustomDevice
PaddlePaddle custom device implementation.
PaddleFleetX
Paddle distributed training examples: ResNet, BERT, GPT, MoE, DataParallel, ModelParallel, PipelineParallel, HybridParallel, AutoParallel, ZeRO, Sharding, Recompute, GradientMerge, Offload, AMP, DGC, LocalSGD, Wide&Deep
paddlefx
An experimental project for Paddle's Python IR.
PaddleSlim
PaddleSlim is an open-source library for deep model compression and architecture search.
PaddleSOT
A bytecode-level implementation of a symbolic opcode translator for PaddlePaddle
tensorflow-resources
Curated TensorFlow code resources to help you get started