Ye Liu's repositories

adaptive_auto_attack

Adversarial Robustness, White-box, Adversarial Attack

Language: Python · License: MIT · Stargazers: 48 · Issues: 3

EWR-PGD

White-box adversarial attack
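A white-box attack of this kind assumes full gradient access to the target model. A minimal sketch of standard L∞ projected gradient descent (illustrative only, not the EWR-PGD algorithm from this repository; `grad_fn` and the toy quadratic loss are my assumptions):

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.1, step=0.02, n_iter=20):
    """Projected gradient ascent on the loss inside an L-infinity ball.

    grad_fn(x_adv) returns the loss gradient w.r.t. the input; having
    this gradient is what makes the attack white-box.  Sketch only --
    not the EWR-PGD method itself.
    """
    x_adv = x + np.random.uniform(-eps, eps, size=x.shape)  # random restart
    for _ in range(n_iter):
        g = grad_fn(x_adv)
        x_adv = x_adv + step * np.sign(g)          # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project into the eps-ball
    return x_adv

# Toy example: maximise L(x) = 0.5 * ||x||^2, whose input gradient is x.
np.random.seed(0)
x0 = np.zeros(4)
adv = pgd_attack(x0, grad_fn=lambda v: v, eps=0.1)
```

On a real classifier, `grad_fn` would backpropagate a classification loss through the network instead of the toy quadratic.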

adversarial-robustness-toolbox

Python library for adversarial machine learning: attacks and defences for neural networks, logistic regression, decision trees, SVMs, gradient-boosted trees, Gaussian processes, and more, with support for multiple frameworks.

Language: Jupyter Notebook · License: MIT · Stargazers: 1 · Issues: 1

advertorch

A Toolbox for Adversarial Robustness Research

Language: Jupyter Notebook · License: LGPL-3.0 · Stargazers: 1 · Issues: 1

limited-blackbox-attacks

Code for "Black-box Adversarial Attacks with Limited Queries and Information" (http://arxiv.org/abs/1804.08598)

Language: Python · Stargazers: 1 · Issues: 1
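In the limited-query black-box setting, the attacker sees only loss values, not gradients. A sketch of the NES-style antithetic-sampling estimator that underlies the linked paper's approach (illustrative; the function name and toy loss are mine, not from the repository):

```python
import numpy as np

def nes_gradient(loss_fn, x, sigma=0.01, n_samples=50, rng=None):
    """Estimate the gradient of loss_fn at x from function values only.

    Each sample draws a Gaussian direction u and queries the loss at
    x + sigma*u and x - sigma*u (antithetic pair), so the model is
    treated as a black box.  Sketch of the estimator, not the full
    attack from the paper.
    """
    rng = np.random.default_rng(rng)
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        g += (loss_fn(x + sigma * u) - loss_fn(x - sigma * u)) * u
    return g / (2 * sigma * n_samples)

# Sanity check on f(x) = ||x||^2, whose true gradient is 2x.
x = np.array([1.0, -2.0, 0.5])
est = nes_gradient(lambda v: float(v @ v), x, n_samples=5000, rng=0)
```

The estimate can then drive a PGD-style update loop, trading query budget for gradient accuracy.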

AdvBox

AdvBox is a toolbox for generating adversarial examples that fool neural networks built with PaddlePaddle, PyTorch, Caffe2, MXNet, Keras, or TensorFlow. It can also benchmark the robustness of machine learning models, and it provides a command-line tool for generating adversarial examples with zero coding.

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 0 · Issues: 1

awesome-collections

Collections of all awesome things!

License: GPL-3.0 · Stargazers: 0 · Issues: 1

Awesome-Noah

:octocat: AI community Noah plan: reproducible top solutions from AI data competitions (Awesome Top Solution List of Excellent AI Competitions)

Language: Python · License: MIT · Stargazers: 0 · Issues: 1

cifar10_challenge

A challenge to explore adversarial robustness of neural networks on CIFAR10.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

convex_adversarial

A method for training neural networks that are provably robust to adversarial attacks.

Language: Python · License: MIT · Stargazers: 0 · Issues: 1

fast_adversarial

Code for the CVPR 2019 paper "Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses"

Language: Python · License: BSD-3-Clause · Stargazers: 0 · Issues: 1

MachineLearning

Basic Machine Learning and Deep Learning

Language: Python · Stargazers: 0 · Issues: 1

mnist_challenge

A challenge to explore adversarial robustness of neural networks on MNIST.

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

obfuscated-gradients

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

Language: Jupyter Notebook · Stargazers: 0 · Issues: 1

pixel-deflection

Deflecting Adversarial Attacks with Pixel Deflection

Language: Jupyter Notebook · Stargazers: 0 · Issues: 1
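This defense replaces randomly chosen pixels with a randomly chosen nearby pixel, on the intuition that adversarial perturbations are more fragile to such local corruption than natural image statistics are. A sketch of the deflection step alone (the paper follows it with wavelet denoising, omitted here; parameter names are mine):

```python
import numpy as np

def pixel_deflection(img, n_deflections=200, window=5, rng=None):
    """Replace random pixels with a random neighbour from a local window.

    Sketch of the "pixel deflection" step of the defense; the published
    method additionally applies wavelet denoising to the result.
    """
    rng = np.random.default_rng(rng)
    out = img.copy()
    h, w = img.shape[:2]
    for _ in range(n_deflections):
        r = int(rng.integers(0, h))
        c = int(rng.integers(0, w))
        dr = int(rng.integers(-window, window + 1))   # neighbour offset
        dc = int(rng.integers(-window, window + 1))
        rr = int(np.clip(r + dr, 0, h - 1))           # stay inside the image
        cc = int(np.clip(c + dc, 0, w - 1))
        out[r, c] = img[rr, cc]                       # deflect the pixel
    return out

# Tiny demo on an 8x8 "image"; the input array is left untouched.
img = np.arange(64, dtype=float).reshape(8, 8)
defended = pixel_deflection(img, n_deflections=20, rng=0)
```

Every value in the output comes from somewhere in the original image, so the transform only shuffles local content rather than inventing new values.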

pretrained-models.pytorch

Pretrained ConvNets for PyTorch: NASNet, ResNeXt, ResNet, InceptionV4, InceptionResNetV2, Xception, DPN, etc.

Language: Python · License: BSD-3-Clause · Stargazers: 0 · Issues: 1

tpu

Reference models and tools for Cloud TPUs.

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 0 · Issues: 1

vision

Datasets, Transforms and Models specific to Computer Vision

Language: Jupyter Notebook · License: BSD-3-Clause · Stargazers: 0 · Issues: 0