There are 28 repositories under the tpu topic.
A high-throughput and memory-efficient inference and serving engine for LLMs
Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
SkyPilot: Run AI and batch jobs on any infra (Kubernetes or 12+ clouds). Get unified execution, cost savings, and high GPU availability via a simple interface.
Fast and flexible AutoML with learning guarantees.
Everything we actually know about the Apple Neural Engine (ANE)
Large-scale LLM inference engine
Everything you want to know about Google Cloud TPU
Neural network-based chess engine capable of natural language commentary
Dual Edge TPU adapter for use on a system with a single PCIe port, via an M.2 A/B/E/M slot
JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (GPUs to follow; PRs welcome).
DECIMER Image Transformer is a deep-learning-based tool designed for automated recognition of chemical structure images. Leveraging transformer architectures, the model converts chemical images into SMILES strings, enabling the digitization of chemical data from scanned documents, literature, and patents.
Benchmarking suite to evaluate 🤖 robotics computing performance. Vendor-neutral. ⚪Grey-box and ⚫Black-box approaches.
🖼 Training StyleGAN2 on TPUs in JAX
Small-scale Tensor Processing Unit built on an FPGA
EfficientNet, MobileNetV3, MobileNetV2, MixNet, etc in JAX w/ Flax Linen and Objax
Simple and efficient RevNet-Library for PyTorch with XLA and DeepSpeed support and parameter offload
Train your artificial neural networks much faster with TPUs
FREE TPU V3plus for FPGA is the free version of a commercial AI processor (EEP-TPU) for deep learning edge inference
Unofficial implementation of Octave Convolutions (OctConv) in TensorFlow / Keras.
xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool that helps Cloud developers orchestrate training jobs on accelerators such as TPUs and GPUs on GKE.
TF2 implementation of knowledge distillation using the "function matching" hypothesis from https://arxiv.org/abs/2106.05237.
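The core of that distillation setup is matching the student's softened output distribution to the teacher's via a KL-divergence loss at a temperature. A minimal numpy sketch of that loss (the function names and temperature value here are illustrative, not the repository's API):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Numerically stable softmax over softened logits.
    z = logits / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions.
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([4.0, 1.0, 0.5])
student_matched = teacher.copy()           # identical logits -> zero loss
student_off = np.array([1.0, 4.0, 0.5])    # mismatched logits -> positive loss

loss_matched = kd_loss(student_matched, teacher)
loss_off = kd_loss(student_off, teacher)
```

The "function matching" hypothesis pushes this further: teacher and student see identical (aggressively augmented) inputs, so the student matches the teacher as a function, not just on clean examples.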
Edge TPU Accelerator / Multi-TPU + MobileNet-SSD v2 + Python + Async + LattePandaAlpha/RaspberryPi3/LaptopPC
Repository for Google Summer of Code 2019 https://summerofcode.withgoogle.com/projects/#4662790671826944
Testing framework for Deep Learning models (Tensorflow and PyTorch) on Google Cloud hardware accelerators (TPU and GPU)
Code repository for the Korean edition of Deep Learning with Python, Second Edition (by the creator of Keras)
Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP
:dart: Accumulated Gradients for TensorFlow 2
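Gradient accumulation sums gradients over several micro-batches before applying one optimizer step, so a large effective batch fits in limited memory. A minimal numpy sketch of the identity it relies on (the model and function names are illustrative, not the repository's API):

```python
import numpy as np

def grad_mse(w, x, y):
    # Gradient of 0.5 * mean((x @ w - y)^2) with respect to w.
    return x.T @ (x @ w - y) / len(x)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))
y = rng.normal(size=8)
w = np.zeros(3)

# Full-batch gradient computed in one pass.
full = grad_mse(w, x, y)

# Same gradient accumulated over two micro-batches of 4,
# weighted by micro-batch size before averaging.
acc = np.zeros(3)
for xb, yb in ((x[:4], y[:4]), (x[4:], y[4:])):
    acc += grad_mse(w, xb, yb) * len(xb)
acc /= len(x)
```

In a TF2 training loop the same idea means calling `tape.gradient` per micro-batch, adding into accumulator variables, and invoking `optimizer.apply_gradients` only every N steps.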
🪐 The Sebulba architecture to scale reinforcement learning on Cloud TPUs in JAX