EPFL Machine Learning and Optimization Laboratory's repositories
attention-cnn
Source code for "On the Relationship between Self-Attention and Convolutional Layers"
OptML_course
EPFL Course - Optimization for Machine Learning - CS-439
landmark-attention
Landmark Attention: Random-Access Infinite Context Length for Transformers
collaborative-attention
Code for Multi-Head Attention: Collaborate Instead of Concatenate
error-feedback-SGD
SGD with compressed gradients and error-feedback: https://arxiv.org/abs/1901.09847
topology-in-decentralized-learning
Code related to "Beyond spectral gap: The role of the topology in decentralized learning".
easy-summary
Difficulty-guided text summarization
phantomedicus
MedSurge: medical survey generator
cifar
MLO internal CIFAR-10 / CIFAR-100 reference implementation. Single machine, variable batch sizes, with optional gradient compression. Needs clear documentation so it is easy to use and we don't lose time searching for hyperparameters. We can later keep it in sync with mlbench, but self-contained is even better.
epfml-utils
Tools for experimentation and using run:ai. The aim is for these to be small self-contained utilities that are used by multiple people.