AI4LIFE-GROUP's repositories
LLM_Explainer
Code for paper: Are Large Language Models Post Hoc Explainers?
rise-against-distribution-shift
Codebase for robust learning under the intersection of causal and adversarial distribution shifts
disagreement-problem
Code repo for the disagreement problem paper
fair-unlearning
Fair Machine Unlearning: Data Removal while Mitigating Disparities
fair_ranking_effectiveness_on_outcomes
AIES 2021 Paper: Does Fair Ranking Improve Minority Outcomes?
robust-grads
Code for https://arxiv.org/abs/2306.06716
arxiv-latex-cleaner
arXiv LaTeX Cleaner: Easily clean the LaTeX code of your paper to submit to arXiv
average-case-robustness
Characterizing Data Point Vulnerability via Average-Case Robustness, UAI 2024
UAI22_DataPoisoningAttacksonOff-PolicyPolicyEvaluationMethods_RL
DOPE: Data Poisoning Attacks on Off-Policy Policy Evaluation Methods
CounterfactualDistanceAttack
"On the Privacy Risks of Algorithmic Recourse". Martin Pawelczyk, Himabindu Lakkaraju* and Seth Neel*. In International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR, 2023.
In-Context-Unlearning
"In-Context Unlearning: Language Models as Few Shot Unlearners". Martin Pawelczyk, Seth Neel* and Himabindu Lakkaraju*. arXiv preprint arXiv:2310.07579, 2023.
ProbabilisticallyRobustRecourse
"Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness". M. Pawelczyk, T. Datta, J. v. d. Heuvel, G. Kasneci and H. Lakkaraju. In International Conference on Learning Representations (ICLR), 2023.
rocerf_code
Source code for ROCERF