There are 17 repositories under the data-poisoning topic.
A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.
A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them
A curated list of academic events on AI Security & Privacy
[NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
[ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
The official implementation of the USENIX Security '23 paper "Meta-Sift" — ten minutes or less to find a clean subset of 1,000 or more samples in a poisoned dataset.
How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021)
Experiments on Data Poisoning Regression Learning
Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression
Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning (NeurIPS 2021)
CCS'22 Paper: "Identifying a Training-Set Attack’s Target Using Renormalized Influence Estimation"
A backdoor attack in a federated learning setting using the FATE framework
Measure and Boost Backdoor Robustness
Analyzing Adversarial Bias and the Robustness of Fair Machine Learning
Code for the paper "Analysis and Detectability of Offline Data Poisoning Attacks on Linear Systems"
[NeurIPS 2022] Can Adversarial Training Be Manipulated By Non-Robust Features?
An experimental framework for monitoring data poisoning in streaming settings.
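To make the topic concrete for readers browsing the list: the following is a minimal, self-contained sketch (not taken from any repository above; all names are illustrative) of a label-poisoning attack on ordinary least-squares regression, the simplest setting several of these repos study. An attacker who flips the sign of just two training targets noticeably shifts the fitted slope.

```python
# Hypothetical illustration of label poisoning against 1-D least squares.
# Not from any listed repository; names and data are made up for the demo.

def fit_slope(xs, ys):
    """Closed-form least-squares slope of y on x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Clean training data drawn from y = 2x.
xs = [float(i) for i in range(10)]
ys = [2.0 * x for x in xs]
clean_slope = fit_slope(xs, ys)  # recovers 2.0 exactly

# Poisoning: the attacker flips the sign of the last two targets only.
poisoned_ys = ys[:-2] + [-y for y in ys[-2:]]
poisoned_slope = fit_slope(xs, poisoned_ys)  # dragged far below 2.0

print(clean_slope, poisoned_slope)
```

Because least squares minimizes squared error, the two sign-flipped points act as high-leverage outliers and pull the slope negative, even though 80% of the data is untouched — the kind of fragility the defenses in the repositories above (robust subset selection, influence estimation, adversarial training) are built to address.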