Ryan Spring's repositories
lightning-thunder
Source-to-source compiler for PyTorch. It makes PyTorch programs faster both on single accelerators and in distributed settings.
NvFuser
A Fusion Code Generator for NVIDIA GPUs
AITemplate
AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
vector-search-class-notes
Class notes for the course "Long Term Memory in AI - Vector Search and Databases" COS 495 @ Princeton Fall 2023
Auto-GPT
An experimental open-source attempt to make GPT-4 fully autonomous.
twitter-algorithm-ml
Source code for Twitter's Recommendation Algorithm
nvprims-torchdynamo
A Python-level JIT compiler designed to make unmodified PyTorch programs faster.
Autopilot-TensorFlow
A TensorFlow implementation of this Nvidia paper: https://arxiv.org/pdf/1604.07316.pdf with some changes
tutel
Tutel MoE: An Optimized Mixture-of-Experts Implementation
micrograd
A tiny scalar-valued autograd engine and a neural net library on top of it with a PyTorch-like API
minGPT
A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
Optimizing-SGEMM-on-NVIDIA-Turing-GPUs
Optimizing SGEMM kernel functions on NVIDIA GPUs to close-to-cuBLAS performance.
LSH_DeepLearning
Scalable and Sustainable Deep Learning via Randomized Hashing
cuda-training-series
Training materials associated with NVIDIA's CUDA Training Series (www.olcf.ornl.gov/cuda-training-series/)
xla
Enabling PyTorch on Google TPU
Optimizing-DGEMM-on-Intel-CPUs-with-AVX512F
Stepwise optimizations of DGEMM on CPU, eventually reaching performance faster than Intel MKL, even with multithreading.
mongoose
A Learnable LSH Framework for Efficient NN Training
Optimizing-DGEMV-on-Intel-CPUs
Highly optimized DGEMV on CPU with both serial and parallel performance better than MKL and OpenBLAS.
cs231n
Solutions to Stanford CS231n Spring 2018 Course Assignments.
Count-Sketch-Optimizers
A compressed adaptive optimizer for training large-scale deep learning models using PyTorch
PyTorch_GBW_LM
PyTorch Language Model for 1-Billion Word (LM1B / GBW) Dataset
LSH-Mutual-Information
Use LSH Sampling for Mutual Information Estimation
atari-representation-learning
Code for "Unsupervised State Representation Learning in Atari"
comp450-Reachability-Guided-RRT
Use dynamic constraints to sample plausible states for the RRT algorithm, improving robot motion planning
comp450-planning_under_uncertainty
Motion planning for a steerable needle under action uncertainty