There are 38 repositories under the jax topic.
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Deep Learning for humans
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and Flax.
🏆 A ranked list of awesome machine learning Python libraries. Updated weekly.
Flexible and powerful tensor operations for readable and reliable code (for PyTorch, JAX, TensorFlow, and others)
It is my belief that you, the postgraduate students and job-seekers for whom the book is primarily meant, will benefit from reading it; I also hope that even the most experienced researchers will find it fascinating.
TFDS is a collection of datasets ready to use with TensorFlow, JAX, ...
JAX implementation of OpenAI's Whisper model for up to 70x speed-up on TPU.
Scenic: A JAX Library for Computer Vision Research and Beyond
Training and serving large-scale neural networks with auto parallelization.
JAX-based neural network library
Functional programming language for signal processing and sound synthesis
Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Flax.
Fast and Easy Infinite Neural Networks in Python
Monte Carlo tree search in JAX
Repository of Jupyter notebook tutorials for teaching the Deep Learning Course at the University of Amsterdam (MSc AI), Fall 2023
PennyLane is a cross-platform Python library for differentiable programming of quantum computers. Train a quantum computer the same way as a neural network.
Elegant easy-to-use neural networks + scientific computing in JAX. https://docs.kidger.site/equinox/
Notes, examples, and Python demos for the 2nd edition of the textbook "Machine Learning Refined" (published by Cambridge University Press).
Official repository for the "Big Transfer (BiT): General Visual Representation Learning" paper.
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
Distributed ML Training and Fine-Tuning on Kubernetes
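As a quick illustration of the tensor-operation style the einops entry above refers to, its `rearrange` patterns (e.g. `"b h w c -> b (h w) c"`) describe axis merges that map directly onto plain reshapes. A minimal NumPy-only sketch of that particular pattern (the shapes here are made up for the example) is:

```python
import numpy as np

# A batch of 2 images, 4x4 pixels, 3 channels, channels-last layout:
# (batch, height, width, channel)
x = np.arange(2 * 4 * 4 * 3).reshape(2, 4, 4, 3)

# einops-style: rearrange(x, "b h w c -> b (h w) c")
# Plain NumPy equivalent: merge the two spatial axes into one.
b, h, w, c = x.shape
y = x.reshape(b, h * w, c)

print(y.shape)  # (2, 16, 3)
```

The appeal of the einops notation is that the pattern string documents the layout in-place, whereas the bare `reshape` call leaves the axis semantics implicit.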