Sangkug Lym's repositories
TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including 8-bit floating point (FP8) precision on Hopper GPUs, providing better performance with lower memory utilization in both training and inference.
NeMo-Megatron-Launcher
NeMo Megatron launcher and tools
Megatron-LM
Ongoing research training transformer models at scale
pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Language: Python · License: NOASSERTION
lightning
Build and train PyTorch models and connect them to the ML lifecycle using Lightning App templates, without handling DIY infrastructure, cost management, scaling, and other headaches.
License: Apache-2.0 · Language: HTML