AI4LIFE-GROUP

The AI4LIFE group at Harvard is led by Hima Lakkaraju. We study interpretability, fairness, privacy, and reliability of AI and ML models.

Twitter: @ai4life_harvard

AI4LIFE-GROUP's repositories

OpenXAI

OpenXAI: Towards a Transparent Evaluation of Model Explanations

Language: JavaScript | License: MIT | Stargazers: 218 | Issues: 6 | Issues: 14
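
For a flavour of the kind of metric such a benchmark evaluates, here is a minimal, self-contained sketch (not the OpenXAI API; all names are illustrative) of a prediction-gap style faithfulness check: mask the top-k attributed features and measure how much the model's output moves.

```python
import numpy as np

def prediction_gap_topk(model_fn, x, attributions, k=3, baseline=0.0):
    """Mask the k features with the largest |attribution| and return the shift in model output."""
    x = np.asarray(x, dtype=float)
    top_k = np.argsort(-np.abs(attributions))[:k]   # indices of the k most important features
    x_masked = x.copy()
    x_masked[top_k] = baseline                      # replace them with a baseline value
    return abs(model_fn(x) - model_fn(x_masked))    # faithful attributions -> large gap

# Toy linear model: feature 0 dominates, so masking it should move the output the most.
model = lambda v: float(np.dot(v, [3.0, 0.1, 0.1, 0.1]))
x = np.array([1.0, 1.0, 1.0, 1.0])
attrs = np.array([3.0, 0.1, 0.1, 0.1])
print(prediction_gap_topk(model, x, attrs, k=1))    # ~ 3.0
```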

SpLiCE

Sparse Linear Concept Embeddings

Language: Python | License: Apache-2.0 | Stargazers: 34 | Issues: 3 | Issues: 3
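
The core idea can be sketched in a few lines (my own illustration, not the SpLiCE code): express a dense embedding as a sparse, non-negative combination of unit-norm concept vectors via an L1-penalized fit. The random concept dictionary below is purely illustrative; scikit-learn is assumed.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, n_concepts = 64, 500
concepts = rng.normal(size=(n_concepts, d))
concepts /= np.linalg.norm(concepts, axis=1, keepdims=True)    # unit-norm concept vectors

embedding = 0.7 * concepts[10] + 0.3 * concepts[42]            # toy "image" embedding

# Solve min_w ||embedding - concepts^T w||^2 + alpha * ||w||_1  subject to w >= 0.
lasso = Lasso(alpha=0.01, positive=True, max_iter=10_000)
lasso.fit(concepts.T, embedding)                                # design matrix is d x n_concepts

weights = lasso.coef_
active = np.nonzero(weights > 1e-4)[0]
print("active concepts:", active)                               # expect roughly {10, 42}
```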

LLM_Explainer

Code for the paper "Are Large Language Models Post Hoc Explainers?"

Language: Jupyter Notebook | License: MIT | Stargazers: 20 | Issues: 5 | Issues: 0
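
As a rough illustration of the setup the paper studies (my paraphrase, not this repository's prompts), an LLM is shown a handful of (input, model prediction) pairs and asked, post hoc, which features drove the predictions. Feature names and wording below are made up.

```python
def build_explainer_prompt(feature_names, samples, predictions):
    """samples: list of feature-value lists; predictions: black-box outputs for each sample."""
    lines = ["The following inputs were given to a black-box classifier."]
    for values, pred in zip(samples, predictions):
        pairs = ", ".join(f"{name}={value}" for name, value in zip(feature_names, values))
        lines.append(f"Input: {pairs} -> Prediction: {pred}")
    lines.append("Rank the features from most to least important for these predictions, "
                 "answering with feature names only.")
    return "\n".join(lines)

print(build_explainer_prompt(
    ["age", "income", "debt"],
    [[25, 40000, 5000], [60, 42000, 30000]],
    ["approved", "denied"],
))
```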

rise-against-distribution-shift

Code base for learning that is robust to an intersection of causal and adversarial distribution shifts

Language: Python | Stargazers: 3 | Issues: 2 | Issues: 0
Language: Jupyter Notebook | Stargazers: 3 | Issues: 2 | Issues: 0

lfa

Local function approximation (LFA) framework, NeurIPS 2022

Language: Python | Stargazers: 2 | Issues: 2 | Issues: 0
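
The framework views many popular explainers as local approximations of a black box around one input; the sketch below (my own toy version, not this repository's code) fits a least-squares linear surrogate over a Gaussian neighbourhood, which is one instance of that recipe.

```python
import numpy as np

def local_linear_explanation(model_fn, x, n_samples=2000, sigma=0.5, seed=0):
    """Fit a least-squares linear surrogate to model_fn over a Gaussian neighbourhood of x."""
    rng = np.random.default_rng(seed)
    X = x + sigma * rng.normal(size=(n_samples, x.size))        # perturbations around x
    y = np.array([model_fn(row) for row in X])
    X_aug = np.hstack([X - x, np.ones((n_samples, 1))])         # centre at x, add an intercept
    coef, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
    return coef[:-1]                                             # per-feature local slopes

# Toy black box: near x = (1, -2) the local slopes should be roughly [2*x0, 3] = [2, 3].
f = lambda v: v[0] ** 2 + 3 * v[1]
print(local_linear_explanation(f, np.array([1.0, -2.0])))
```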

DiET

Code for "Discriminative Feature Attributions via Distractor Erasure Tuning"

Language: Python | Stargazers: 1 | Issues: 3 | Issues: 0

disagreement-problem

Code repo for the disagreement problem paper

Language: Jupyter Notebook | License: MIT | Stargazers: 1 | Issues: 3 | Issues: 1
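
One of the simplest disagreement metrics the paper considers is top-k feature agreement between two attribution vectors; a minimal re-implementation (mine, not the repository's) is below.

```python
import numpy as np

def topk_feature_agreement(attr_a, attr_b, k=5):
    """Fraction of the top-k features (by |attribution|) shared by two explanations."""
    top_a = set(np.argsort(-np.abs(attr_a))[:k])
    top_b = set(np.argsort(-np.abs(attr_b))[:k])
    return len(top_a & top_b) / k

a = np.array([0.9, 0.1, 0.4, 0.0, 0.3])
b = np.array([0.2, 0.8, 0.5, 0.1, 0.0])
print(topk_feature_agreement(a, b, k=2))   # top-2 sets are {0, 2} and {1, 2} -> 0.5
```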

fair-unlearning

Fair Machine Unlearning: Data Removal while Mitigating Disparities

Language: Python | License: Apache-2.0 | Stargazers: 1 | Issues: 3 | Issues: 0

fair_ranking_effectiveness_on_outcomes

AIES 2021 paper: Does Fair Ranking Improve Minority Outcomes?

Language: Jupyter Notebook | Stargazers: 1 | Issues: 1 | Issues: 0

GraphXAI

GraphXAI: Resource to support the development and evaluation of GNN explainers

Language: Python | License: MIT | Stargazers: 1 | Issues: 0 | Issues: 0

nifty

Code for the NIFTY paper (https://arxiv.org/abs/2102.13186)

Language: Python | License: MIT | Stargazers: 1 | Issues: 0 | Issues: 0

robust-grads

Code for https://arxiv.org/abs/2306.06716

Language: Python | Stargazers: 1 | Issues: 3 | Issues: 0
Language: Python | License: MIT | Stargazers: 1 | Issues: 2 | Issues: 0

arxiv-latex-cleaner

arXiv LaTeX Cleaner: Easily clean the LaTeX code of your paper to submit to arXiv

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0

average-case-robustness

Characterizing Data Point Vulnerability via Average-Case Robustness, UAI 2024

Language: Python | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0
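
The quantity of interest is roughly the probability that a prediction flips under random perturbations of a given scale; a naive Monte Carlo estimate (my sketch, not the repository's estimators) looks like this.

```python
import numpy as np

def average_case_robustness(predict_fn, x, sigma=0.1, n_samples=5000, seed=0):
    """Monte Carlo estimate of P[predict(x + eps) != predict(x)] with eps ~ N(0, sigma^2 I)."""
    rng = np.random.default_rng(seed)
    base = predict_fn(x)
    noise = sigma * rng.normal(size=(n_samples, x.size))
    flips = sum(predict_fn(x + eps) != base for eps in noise)
    return flips / n_samples

# Toy linear classifier with boundary x0 + x1 = 0: points near the boundary flip often.
clf = lambda v: int(v[0] + v[1] > 0)
print(average_case_robustness(clf, np.array([0.05, 0.0])))   # noticeably greater than 0
print(average_case_robustness(clf, np.array([2.0, 2.0])))    # ~ 0, far from the boundary
```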

lcnn

Low Curvature Neural Networks (NeurIPS 2022)

Language: Python | Stargazers: 0 | Issues: 3 | Issues: 0

UAI22_DataPoisoningAttacksonOff-PolicyPolicyEvaluationMethods_RL

DOPE: Data Poisoning Attacks on Off-Policy Policy Evaluation Methods

Language: Python | Stargazers: 0 | Issues: 1 | Issues: 0
Language: Python | Stargazers: 0 | Issues: 3 | Issues: 0
Language: Jupyter Notebook | Stargazers: 0 | Issues: 3 | Issues: 0

CounterfactualDistanceAttack

"On the Privacy Risks of Algorithmic Recourse". Martin Pawelczyk, Himabindu Lakkaraju* and Seth Neel*. In International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR, 2023.

Language: Jupyter Notebook | Stargazers: 0 | Issues: 3 | Issues: 0
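
The attack's core signal, sketched very loosely here (my illustration, not the paper's implementation), is the distance between an input and the counterfactual a recourse method returns for it, which proxies distance to the decision boundary; thresholding that distance gives a simple membership-inference guess. `counterfactual_fn` is a hypothetical recourse generator supplied by the caller, and the threshold would have to be calibrated, e.g. on shadow data.

```python
import numpy as np

def counterfactual_distance(x, counterfactual_fn):
    """L2 distance between an input and the counterfactual a recourse method returns for it."""
    x = np.asarray(x, dtype=float)
    return float(np.linalg.norm(x - np.asarray(counterfactual_fn(x), dtype=float)))

def membership_guess(x, counterfactual_fn, threshold):
    """Guess 'member' when the counterfactual is unusually far away; the intuition is that
    training points tend to be classified more confidently, i.e. sit farther from the boundary."""
    return counterfactual_distance(x, counterfactual_fn) > threshold

# Toy setup: boundary x0 + x1 = 0, and a "recourse method" that orthogonally projects onto it.
project = lambda x: np.asarray(x, dtype=float) - (x[0] + x[1]) / 2.0
print(counterfactual_distance([2.0, 2.0], project))          # 2 * sqrt(2) ~ 2.83
print(membership_guess([2.0, 2.0], project, threshold=1.0))  # True
```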

In-Context-Unlearning

"In-Context Unlearning: Language Models as Few Shot Unlearners". Martin Pawelczyk, Seth Neel* and Himabindu Lakkaraju*; arXiv preprint: arXiv:2310.07579; 2023.

Language: Jupyter Notebook | Stargazers: 0 | Issues: 3 | Issues: 0
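
The prompting idea, paraphrased loosely (this is not the repository's code; label names and formatting are invented): the examples to be unlearned appear in the context with flipped labels while the rest keep their true labels, and the model is then queried as usual.

```python
FLIP = {"positive": "negative", "negative": "positive"}

def build_unlearning_prompt(examples, forget_indices, query):
    """examples: list of (text, label) pairs; forget_indices: positions to 'unlearn'."""
    lines = []
    for i, (text, label) in enumerate(examples):
        shown = FLIP[label] if i in forget_indices else label   # flip only the forget points
        lines.append(f"Review: {text}\nSentiment: {shown}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

print(build_unlearning_prompt(
    [("great film", "positive"), ("boring plot", "negative"), ("loved it", "positive")],
    forget_indices={0},          # behave as if example 0 had not been trained on
    query="surprisingly good",
))
```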

ProbabilisticallyRobustRecourse

"Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness". M. Pawelczyk, T. Datta, J. v.d Heuvel, G. Kasneci, H. Lakkaraju. International Conference on Learning Representations 2023 (ICLR).

Language: Python | License: MIT | Stargazers: 0 | Issues: 3 | Issues: 0

rocerf_code

Source code for ROCERF

Language: Python | License: MIT | Stargazers: 0 | Issues: 3 | Issues: 0