Fir Li's starred repositories
Transform2Act
[ICLR 2022 Oral] Official PyTorch Implementation of "Transform2Act: Learning a Transform-and-Control Policy for Efficient Agent Design".
eth-cs-notes
Lecture notes and cheatsheets for Master's in Computer Science at ETH Zurich
graph-ood-detection
A curated list of resources for OOD detection with graph data.
Unleashing-Mask
[ICML 2023] "Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability"
KDD22-OODGAT
Implementation of OODGAT from the KDD 2022 paper "Learning on Graphs with Out-of-Distribution Nodes".
ttt_cifar_release
TTT Code Release
Awesome-model-inversion-attack
A curated list of resources for model inversion attack (MIA).
Lottery-Ticket-Hypothesis-in-Pytorch
A PyTorch implementation of "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" by Jonathan Frankle and Michael Carbin, easily adaptable to any model/dataset.
OOD-detection-using-OECC
Outlier Exposure with Confidence Control for Out-of-Distribution Detection
informative-outlier-mining
We propose a theoretically motivated method, Adversarial Training with informative Outlier Mining (ATOM), which improves the robustness of OOD detection to various types of adversarial OOD inputs and establishes state-of-the-art performance.
logitnorm_ood
Official code for ICML 2022: Mitigating Neural Network Overconfidence with Logit Normalization
error-detection
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
Awesome-Pruning
A curated list of neural network pruning resources.
gradnorm_ood
On the Importance of Gradients for Detecting Distributional Shifts in the Wild
ttt_imagenet_release
TTT Code Release
Geometry-aware-Instance-reweighted-Adversarial-Training
Code for the paper "Geometry-aware Instance-reweighted Adversarial Training" (ICLR 2021 Oral).
semisup-adv
Semi-supervised learning for adversarial robustness (https://arxiv.org/pdf/1905.13736.pdf)
pre-training
Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019)
auto-attack
Code for "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks".