Milton's repositories
microhaskell
A small autodiff library in Haskell and a simple working feedforward neural net built on top of it, from scratch, with zero dependencies.
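The repository itself is in Haskell; as a language-neutral sketch of the core autodiff idea, here is a minimal forward-mode implementation using dual numbers in Python (all names are illustrative, not taken from the repo):

```python
# Minimal forward-mode autodiff via dual numbers (illustrative sketch,
# not code from the repo, which is written in Haskell).
class Dual:
    def __init__(self, val, der=0.0):
        self.val = val  # function value
        self.der = der  # derivative w.r.t. the chosen input

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def derivative(f, x):
    # Seed the input with derivative 1 and read off f'(x).
    return f(Dual(x, 1.0)).der

# f(x) = x^2 + 3x, so f'(2) = 2*2 + 3 = 7
print(derivative(lambda x: x * x + 3 * x, 2.0))  # -> 7.0
```

Overloading the arithmetic operators is what lets an unmodified function `f` propagate derivatives automatically.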
GPT-Haskell
A pure Haskell implementation of a decoder-only transformer (GPT)
Double-Descent-Deep-Nets
Double-descent experiments and reproductions on classical ML models and deep neural nets
Einstein-Riddle-Z3
Solving Einstein's Fish riddle with the Z3 theorem prover.
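The repo encodes the full riddle for the Z3 SMT solver; as a dependency-free sketch of the same constraint-satisfaction idea, here is a brute-force search over a scaled-down puzzle (the mini-constraints below are hypothetical, not from the actual riddle):

```python
# Brute-force constraint search over a 3-house mini-puzzle.
# Illustrates the constraint-satisfaction style; the repo instead
# hands the full riddle's constraints to the Z3 theorem prover.
from itertools import permutations

houses = [0, 1, 2]  # positions, left to right
solutions = []
for colors in permutations(["red", "green", "blue"]):
    for pets in permutations(["fish", "dog", "cat"]):
        color_at = dict(zip(houses, colors))
        pet_at = dict(zip(houses, pets))
        # Hypothetical mini-constraints:
        if color_at[0] != "red":                    # leftmost house is red
            continue
        if pet_at[colors.index("green")] != "dog":  # green house keeps the dog
            continue
        if abs(pets.index("cat") - colors.index("blue")) != 1:  # cat lives next to blue
            continue
        solutions.append((colors, pets))
```

A solver like Z3 replaces this exhaustive loop with symbolic search, which is what makes the full 5-house riddle tractable to state declaratively.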
SLIC-Superpixels
SLIC (Simple Linear Iterative Clustering) Superpixels for pixel clustering and segmentation
Adam-Optimizer
Adam optimizer in Haskell using the StateT monad transformer.
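The repo threads the optimizer state through `StateT` in Haskell; the update rule itself is language-neutral. A minimal scalar sketch in Python (illustrative, not the repo's code):

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update for a single scalar parameter.
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad * grad   # second-moment (uncentered) estimate
    m_hat = m / (1 - b1 ** t)             # bias correction for zero init
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = x^2 (gradient 2x) starting from x = 1.0
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.01)
```

The `(theta, m, v)` triple returned here is exactly the kind of state a `StateT`-based Haskell version would carry implicitly.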
llm.c
LLM training in simple, raw C/CUDA
Non-Local-Image-Dehazing
Non-Local Image Dehazing using haze-lines
sophia-jax
JAX implementation of 'Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training'
analisis_matematico
Repository for the Análisis Matemático para Inteligencia Artificial (Mathematical Analysis for Artificial Intelligence) course (CEIA-FIUBA)
Baby-Diffusion-Model
Denoising Diffusion Probabilistic Models (toy version)
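The DDPM forward process can be sampled in closed form at any timestep. A dependency-free sketch in Python, assuming the standard linear beta schedule (illustrative, not the repo's code):

```python
import math, random

def forward_noise(x0, alpha_bar, rng=random):
    # Closed-form DDPM forward process for a scalar:
    # x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
    eps = rng.gauss(0.0, 1.0)
    return math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * eps

# Linear beta schedule (1e-4 -> 0.02 over 1000 steps), then the
# cumulative product alpha_bar_t = prod_{s<=t} (1 - beta_s).
betas = [1e-4 + (0.02 - 1e-4) * t / 999 for t in range(1000)]
alpha_bars = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bars.append(prod)
```

As `alpha_bars` decays toward zero, `forward_noise` interpolates from (almost) the clean sample to (almost) pure Gaussian noise, which is what the reverse-process network learns to undo.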
Bend
A massively parallel, high-level programming language
CUDA-Transformer
Simple transformer example with an attention CUDA kernel
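The attention computed by such a kernel is softmax(QKᵀ/√d)·V. A pure-Python reference version, useful as a check against any CUDA implementation (illustrative sketch, not the repo's kernel):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    mx = max(xs)
    exps = [math.exp(x - mx) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(d)) V, with Q, K, V as row-major lists of vectors.
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)  # attention weights for this query, sum to 1
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out
```

A CUDA kernel parallelizes the per-query loop (and typically fuses the softmax), but must produce the same numbers as this reference.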
cytological-image-thresholding
Minimal implementation of the paper 'Adaptive Local Thresholding for Detection of Nuclei in Diversely Stained Cytology Images'
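The paper's method is more elaborate, but the underlying idea of local (rather than global) thresholding can be sketched with a plain local-mean rule (a generic illustration, not the paper's algorithm):

```python
def local_mean_threshold(img, k=1, bias=0.0):
    # Binarize: a pixel is foreground if it exceeds the mean of its
    # (2k+1) x (2k+1) neighborhood by `bias` (windows clamped at borders).
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[a][b]
                    for a in range(max(0, i - k), min(h, i + k + 1))
                    for b in range(max(0, j - k), min(w, j + k + 1))]
            mean = sum(vals) / len(vals)
            out[i][j] = 1 if img[i][j] > mean + bias else 0
    return out
```

Comparing each pixel against its neighborhood, instead of one global cutoff, is what makes this robust to the stain-intensity variation the paper targets.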
diffusers
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and Flax.
Faster-KANs
Fast Kolmogorov-Arnold Networks (KANs)
gemma
Open weights LLM from Google DeepMind.
intro_ia
Class materials for the Introducción a la Inteligencia Artificial (Introduction to Artificial Intelligence) course (CEIA-FIUBA)
llama.cpp
LLM inference in C/C++
Mixtures-of-Local-Experts
PyTorch version of the original 'Adaptive Mixtures of Local Experts' applied to MNIST
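The core of a mixture of local experts is a learned gate that produces a softmax weighting over expert outputs. A dependency-free sketch with a linear gate (hypothetical toy setup, not the repo's PyTorch model):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    mx = max(xs)
    exps = [math.exp(x - mx) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe(x, experts, gate_weights):
    # Gate scores are a (hypothetical) linear function of the input;
    # the output is the gate-weighted sum of the expert outputs.
    scores = [sum(w * xi for w, xi in zip(ws, x)) for ws in gate_weights]
    g = softmax(scores)
    outs = [f(x) for f in experts]
    return [sum(gi * oi[j] for gi, oi in zip(g, outs)) for j in range(len(outs[0]))]

# Two toy experts: one doubles the input, one negates it. The gate
# weights below push almost all mass onto the first expert.
experts = [lambda x: [2 * v for v in x], lambda x: [-v for v in x]]
out = moe([1.0, 0.0], experts, gate_weights=[[10.0, 0.0], [-10.0, 0.0]])
```

In the original formulation each expert specializes on a region of input space because the gate, trained jointly, routes inputs to the expert that handles them best.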
tinygrad
You like pytorch? You like micrograd? You love tinygrad! ❤️
torchtune
A Native-PyTorch Library for LLM Fine-tuning
vector-quantize-pytorch
Vector (and Scalar) Quantization, in PyTorch