Intel® Gaudi® AI Accelerator's repositories
Model-References
Reference models for the Intel® Gaudi® AI Accelerator
Gaudi-tutorials
Tutorials for running training and inference models on first-gen Gaudi and Gaudi2. Source files for the tutorials at https://developer.habana.ai/
SynapseAI_Core
SynapseAI Core is a reference implementation of the SynapseAI API running on Habana Gaudi
Setup_and_Install
Setup and installation instructions for Habana binaries and Docker image creation
Habana_Custom_Kernel
Examples of writing and building Habana custom kernels using HabanaTools
Gaudi-solutions
Full end-to-end examples showing how to use first-gen Gaudi and Gaudi2 in common use cases
hl-thunk-open
Thunk library for the HabanaLabs kernel driver
Megatron-DeepSpeed
Intel Gaudi's Megatron-DeepSpeed implementation for training large language models
habana-container-runtime
Habana container runtime
habanalabs-k8s-device-plugin
Habana device plugin for Kubernetes
optimum-habana-fork
Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)
pytorch-lightning
The lightweight PyTorch wrapper for high-performance AI research. Scale your models, not the boilerplate.
AutoGPTQ
An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm
DeepSpeedExamples
Example models using DeepSpeed
drivers.gpu.linux-nic.kernel
NIC drivers (Ethernet, IBverbs, and common) for the NIC IP inside Intel's data-center GPU
Intel_Gaudi3_Software
Intel® Gaudi® Software is an implementation of the runtime and graph compiler for Gaudi3
neural-compressor
SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
rdma-core
RDMA core userspace libraries and daemons