There are 5 repositories under the robustness-experiments topic.
This GitHub repository contains the official code for the paper "Evolving Robust Neural Architectures to Defend from Adversarial Attacks".
Contains experimentation notebooks for my Keras example "Consistency Training with Supervision".
NumPyNMF implements nine different Non-negative Matrix Factorization (NMF) algorithms using the NumPy library and compares the robustness of each algorithm to five types of noise in real-world data applications.
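To illustrate the kind of algorithm such a repository compares, here is a minimal NMF sketch using the classic multiplicative-update rules (Lee & Seung) to minimize the Frobenius reconstruction error; the function name and parameters are illustrative, not taken from NumPyNMF itself.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-10, seed=0):
    """Illustrative NMF sketch (not NumPyNMF's code): factor a
    non-negative matrix V ~= W @ H with multiplicative updates
    minimizing the Frobenius norm ||V - WH||."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    # Random non-negative initialization; eps keeps entries strictly positive
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates preserve non-negativity by construction
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Robustness comparisons then typically add synthetic noise (e.g. Gaussian or salt-and-pepper) to V and measure how much the recovered factors degrade.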
This GitHub repository contains the official code for the papers "Robustness Assessment for Adversarial Machine Learning: Problems, Solutions and a Survey of Current Neural Networks and Defenses" and "One Pixel Attack for Fooling Deep Neural Networks".
Comparison of DBNs and FFNNs, with an emphasis on understanding how DBNs work and how robust they are to noise and adversarial attacks.