yejingxin's repositories

Language: Python · Stargazers: 1 · Issues: 0

ai-on-gke

AI on GKE is a collection of examples, best practices, and prebuilt solutions to help build, deploy, and scale AI platforms on Google Kubernetes Engine.

Language: HCL · License: Apache-2.0 · Stargazers: 0 · Issues: 0

diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
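Typical Diffusers usage is a single from_pretrained call followed by a prompt; a minimal text-to-image sketch (the model id and the CUDA device are assumptions, any compatible Hub checkpoint works the same way):

```python
import torch
from diffusers import DiffusionPipeline

# Load a pretrained text-to-image pipeline from the Hugging Face Hub
# (model id is illustrative, not prescribed by this repo).
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

# Run the denoising loop from a text prompt and save the first image.
image = pipe("an astronaut riding a horse on the moon").images[0]
image.save("astronaut.png")
```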

jax

Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
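The three transformations named in the description compose freely; a small sketch:

```python
import jax
import jax.numpy as jnp

def loss(w, x):
    return jnp.sum((w * x - 1.0) ** 2)

# differentiate: grad returns a function computing dloss/dw
grad_loss = jax.grad(loss)

# vectorize: vmap maps loss over a batch of x without an explicit loop
batched_loss = jax.vmap(loss, in_axes=(None, 0))

# JIT: compile the gradient with XLA for CPU/GPU/TPU
fast_grad = jax.jit(grad_loss)

w = jnp.array([2.0, 3.0])
x = jnp.array([1.0, 0.5])
print(fast_grad(w, x))
print(batched_loss(w, jnp.stack([x, 2 * x])))
```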

jaxformer

Minimal library to train LLMs on TPU in JAX with pjit().

License: BSD-3-Clause · Stargazers: 0 · Issues: 0
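pjit(), mentioned above, partitions a jitted function over a device mesh. A minimal sketch; the mesh axis name and shardings are illustrative, and the keyword names (in_shardings/out_shardings) reflect recent JAX versions, where older releases used in_axis_resources:

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.experimental.pjit import pjit
from jax.sharding import Mesh, PartitionSpec as P

# 1-D mesh over all local devices; the axis name "data" is an assumption.
mesh = Mesh(np.array(jax.devices()), ("data",))

# Shard input and output along the "data" mesh axis.
f = pjit(lambda x: x * 2.0,
         in_shardings=P("data"),
         out_shardings=P("data"))

# The sharded dimension must divide evenly across the mesh.
x = jnp.arange(4.0 * jax.device_count())
with mesh:
    y = f(x)
```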

litgpt

20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.

License: Apache-2.0 · Stargazers: 0 · Issues: 0

maxtext

A simple, performant and scalable Jax LLM!

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

Megatron-LM

Ongoing research training transformer models at scale

License: NOASSERTION · Stargazers: 0 · Issues: 0

ml-testing-accelerators

Testing framework for Deep Learning models (Tensorflow and PyTorch) on Google Cloud hardware accelerators (TPU and GPU)

Language: Jsonnet · License: Apache-2.0 · Stargazers: 0 · Issues: 0

NeMo

A scalable generative AI framework built for researchers and developers working on large language models, multimodal models, and speech AI (automatic speech recognition and text-to-speech).

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

orbax

Orbax provides common utility libraries for JAX users.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
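A minimal checkpointing sketch with Orbax's PyTreeCheckpointer; the target path is illustrative and must not already exist:

```python
import jax.numpy as jnp
import orbax.checkpoint as ocp

# Any JAX pytree of arrays can be checkpointed; this one is a toy.
state = {"params": {"w": jnp.ones((2, 2))}, "step": 100}

ckptr = ocp.PyTreeCheckpointer()
ckptr.save("/tmp/ckpt_demo", state)          # write the tree to disk
restored = ckptr.restore("/tmp/ckpt_demo")   # read it back as a pytree
```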

pytorch-lightning

Pretrain, finetune, and deploy AI models on multiple GPUs and TPUs with zero code changes.

License: Apache-2.0 · Stargazers: 0 · Issues: 0
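The core pattern is a LightningModule driven by a Trainer; a minimal sketch with a toy model and dataset, where accelerator="auto" is what makes the GPU/TPU switch a zero-code-change affair:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class LitRegressor(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.layer(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# Toy random dataset; swapping in real data is the only required change.
ds = TensorDataset(torch.randn(64, 4), torch.randn(64, 1))
trainer = pl.Trainer(max_epochs=1, accelerator="auto", devices="auto")
trainer.fit(LitRegressor(), DataLoader(ds, batch_size=16))
```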

ray

Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a toolkit of libraries (Ray AIR) for accelerating ML workloads.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
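Ray's core primitive is the remote task; a minimal sketch of fanning work out over a local cluster:

```python
import ray

ray.init()  # starts a local cluster; pass an address to join an existing one

@ray.remote
def square(x):
    return x * x

# Launch tasks in parallel and gather the results.
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```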

serving

A flexible, high-performance serving system for machine learning models

License: Apache-2.0 · Stargazers: 0 · Issues: 0
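With TensorFlow Serving, prediction is a REST call against /v1/models/&lt;name&gt;:predict; a sketch assuming the stock half_plus_two demo model is being served on the default REST port 8501:

```python
import json
import requests

# TensorFlow Serving's REST predict endpoint:
#   POST /v1/models/<name>:predict  (port 8501 by default).
url = "http://localhost:8501/v1/models/half_plus_two:predict"
payload = {"instances": [1.0, 2.0, 5.0]}

resp = requests.post(url, data=json.dumps(payload))
print(resp.json())  # e.g. {"predictions": [2.5, 3.0, 4.5]}
```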

tf-lingvo

Lingvo, a TensorFlow framework for building neural networks, particularly sequence models.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

TinyLlama

The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.

License: Apache-2.0 · Stargazers: 0 · Issues: 0

tpu-tools

Reference models and tools for Cloud TPUs.

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 0 · Issues: 0

UCSD_BigData

A repository for scripts and notebooks for the UCSD big data course

Language: Python · Stargazers: 0 · Issues: 0

veScale

A PyTorch Native LLM Training Framework

License: Apache-2.0 · Stargazers: 0 · Issues: 0

xla

Enabling PyTorch on Google TPU

License: NOASSERTION · Stargazers: 0 · Issues: 0
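PyTorch/XLA exposes TPUs (and other XLA backends) as ordinary torch devices; a minimal sketch using the classic torch_xla module layout:

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()   # the first available XLA device, e.g. a TPU core
x = torch.randn(2, 2, device=device)
y = (x @ x).sum()
xm.mark_step()             # flush the lazily recorded graph to the XLA compiler
print(y)
```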

xpk

xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool that helps Cloud developers orchestrate training jobs on accelerators such as TPUs and GPUs on GKE.

License: Apache-2.0 · Stargazers: 0 · Issues: 0