Repositories of the Theory of Machine Learning (TML) group at EPFL
llm-adaptive-attacks
Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [arXiv, Apr 2024]
understanding-fast-adv-training
Understanding and Improving Fast Adversarial Training [NeurIPS 2020]
sharpness-vs-generalization
A modern look at the relationship between sharpness and generalization [ICML 2023]
why-weight-decay
Why Do We Need Weight Decay in Modern Deep Learning? [arXiv, Oct 2023]
understanding-sam
Towards Understanding Sharpness-Aware Minimization [ICML 2022]
sgd-sparse-features
SGD with large step sizes learns sparse features [ICML 2023]
adv-training-corruptions
On the effectiveness of adversarial training against common corruptions [UAI 2022]
sam-low-rank-features
Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023]
icl-alignment
Is In-Context Learning Sufficient for Instruction Following in LLMs?
long-is-more-for-alignment
Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024]
tml-epfl.github.io
Repository storing all related information for the weekly TML group meetings.