mdzhang's repositories
abbreviate_journal_names_in_bib
Replace full journal names in a BibTeX database file with their official abbreviated names, or the reverse.
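As a rough illustration of the idea (not this repository's actual code), a minimal Python sketch that swaps journal names inside `journal = {...}` fields, assuming a hand-made abbreviation map; the repository presumably ships an official list:

```python
import re

# Hypothetical mapping for illustration only.
ABBREVIATIONS = {
    "Journal of Machine Learning Research": "J. Mach. Learn. Res.",
    "Physical Review Letters": "Phys. Rev. Lett.",
}

def abbreviate_journals(bibtex: str, reverse: bool = False) -> str:
    """Replace full journal names with abbreviations (or back, with reverse=True)."""
    mapping = {v: k for k, v in ABBREVIATIONS.items()} if reverse else ABBREVIATIONS

    def repl(match: re.Match) -> str:
        name = match.group(2)
        return match.group(1) + mapping.get(name, name) + match.group(3)

    # Match brace-delimited journal fields, e.g. journal = {Physical Review Letters}
    return re.sub(r"(journal\s*=\s*\{)([^{}]+)(\})", repl, bibtex)

entry = "@article{x, journal = {Physical Review Letters}, year = {2020}}"
print(abbreviate_journals(entry))
```

Inverting the same map makes the transformation reversible, which is what the "or the reverse" mode amounts to.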
advertorch
A Toolbox for Adversarial Robustness Research
ANP_backdoor
Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models"
cifar10_challenge
A challenge to explore adversarial robustness of neural networks on CIFAR10.
DeepIPR
Code repo for our NeurIPS 2019 work proposing novel passport-based DNN ownership verification schemes: we embed a passport layer into various deep learning architectures (e.g. AlexNet, ResNet) for Intellectual Property Rights (IPR) protection.
Defending-Neural-Backdoors-via-Generative-Distribution-Modeling
Code for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749
google-research
Google Research
ISSBA
Invisible Backdoor Attack with Sample-Specific Triggers
MINE-Mutual-Information-Neural-Estimation-
A PyTorch implementation of MINE (Mutual Information Neural Estimation)
mine-pytorch
Mutual Information Neural Estimation in PyTorch
model-sanitization
Code for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness", published at ICLR 2020
Mutual-Information-Variational-Bounds
A TensorFlow implementation of mutual information estimation methods
Narcissus-backdoor-attack
The official implementation of the Narcissus clean-label backdoor attack, which needs only THREE images to poison a face recognition dataset in a clean-label way and achieves a 99.89% attack success rate.
NKThesis
LaTeX template for Nankai University master's and doctoral theses (Latex Template for Nankai University)
spectral-stein-grad
Code for "A Spectral Approach to Gradient Estimation for Implicit Distributions"
spectre-defense
Defending Against Backdoor Attacks Using Robust Covariance Estimation
StableNet
Official repository for the CVPR 2021 paper "Deep Stable Learning for Out-Of-Distribution Generalization".
Universal-Litmus-Patterns
Official Repository for the CVPR 2020 paper "Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs"
VIB-pytorch
PyTorch implementation of Deep Variational Information Bottleneck
VIBI
In-depth experiments for VIBI (Variational Information Bottleneck for Interpretability) for MNIST and CIFAR10 written in Python and PyTorch.
Warping-based_Backdoor_Attack-release
WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021)