Adam Noack's repositories
interp_regularization
A novel neural network gradient regularization scheme for adversarial robustness and interpretability.
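A minimal sketch of what input-gradient regularization generally looks like in PyTorch; the penalty weight and the simple squared-gradient form are illustrative assumptions, not necessarily the exact scheme implemented in this repository.

```python
import torch
import torch.nn.functional as F

def regularized_loss(model, x, y, lam=0.1):
    """Cross-entropy plus an L2 penalty on the input gradient.

    Generic input-gradient regularizer for illustration only; the
    repository's actual scheme may differ. `x` is a batched input.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # Gradient of the loss w.r.t. the input, kept in the graph so the
    # penalty itself can be backpropagated through.
    grad_x, = torch.autograd.grad(ce, x, create_graph=True)
    penalty = grad_x.pow(2).sum(dim=tuple(range(1, grad_x.dim()))).mean()
    return ce + lam * penalty

# Usage (illustrative): loss = regularized_loss(model, x_batch, y_batch); loss.backward()
```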
a1noack.github.io
Code for my website.
algo_selection
Custom OpenAI Gym environment and agents.
julia_cuda_test
Testing the speedup of CUDA.jl for multiplying matrices.
react-cluster
Code for detecting unseen attacks.
3H
Code from a C/C++ seminar project.
cuda_neural_net
An implementation of a neural network using CUDA kernels in C++, along with a non-vectorized baseline neural network written using only the standard library.
nn_and_svm_from_scratch
Comparing classification results from sklearn's MLPClassifier and SVC models against a two-layer DNN and an SVM I wrote from scratch.
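For reference, the sklearn side of such a comparison might look like the sketch below; the dataset and hyperparameters are illustrative assumptions rather than the repository's actual setup.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Illustrative dataset; the repository may use a different classification task.
X, y = load_digits(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two-hidden-layer MLP and an RBF-kernel SVM as the sklearn baselines.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
svc = SVC(kernel="rbf", C=1.0)

for name, clf in [("MLPClassifier", mlp), ("SVC", svc)]:
    clf.fit(X_train, y_train)
    print(name, clf.score(X_test, y_test))
```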
pokec_predict_age
Uses natural language data from a user of the social media site Pokec, together with aggregated data describing the user's friends, to predict the user's age.
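A rough sketch of one way such a pipeline could be wired up with sklearn; the column names (`profile_text`, `friend_mean_age`, `age`), the toy data, and the Ridge model are purely hypothetical stand-ins for whatever the repository actually uses.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline

# Hypothetical frame: free-text profile data plus an aggregated friend feature.
df = pd.DataFrame({
    "profile_text": ["likes hiking and music", "student, plays football", "retired teacher"],
    "friend_mean_age": [24.0, 19.5, 61.0],
    "age": [26, 20, 63],
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "profile_text"),        # bag-of-words from the user's text
    ("friends", "passthrough", ["friend_mean_age"]),    # aggregated friend statistics
])
model = Pipeline([("features", features), ("reg", Ridge())])

model.fit(df, df["age"])
print(model.predict(df))
```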
summarization_robustness
We demonstrate that near-SOTA summarization models are fairly robust to different types of transformations and goal-directed perturbations.
bert_score
BERTScore for text generation.
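Basic usage of the upstream bert_score library, as a hedged sketch; the example sentences are made up, and details such as the `lang` argument should be checked against the library's README.

```python
from bert_score import score

candidates = ["The cat sat on the mat."]
references = ["A cat was sitting on the mat."]

# Returns per-example precision, recall, and F1 tensors.
P, R, F1 = score(candidates, references, lang="en", verbose=False)
print(f"BERTScore F1: {F1.mean().item():.4f}")
```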
cv_class
Homework for Computer Vision class.
dark_themed_chrome_file_browser_extension
Code for a Chrome extension that makes the default Chrome file browser better looking.
eeg_data_analysis
Scripts for preprocessing and analyzing the collected EEG data.
eeg_data_collection_scripts
Scripts used to collect EEG waves from study participants.
fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
google-research
Google Research
intro_ai
Homework files and slides from Intro to AI.
labpool
Code for the Labpool site.
Parametric-t-SNE
Running Laurens van der Maaten's parametric t-SNE with Octave and oct2py.
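oct2py drives an Octave session from Python roughly as sketched below; the path and the `parametric_tsne` function name are placeholders for whatever the repository's Octave code actually defines.

```python
import numpy as np
from oct2py import Oct2Py

X = np.random.rand(100, 50)  # toy high-dimensional data

oc = Oct2Py()
oc.addpath("path/to/octave/scripts")  # placeholder path to the repo's .m files
# Hypothetical call; the real function name and signature live in the Octave code.
Y = oc.feval("parametric_tsne", X, 2)
print(Y.shape)
oc.exit()
```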
pytorch_resnet_cifar10
A proper implementation of ResNets for CIFAR-10/100 in PyTorch that matches the description in the original paper.
robust_interpretations
Work showing how enforcing interpretable gradients increases network robustness. Also includes work on robustness to the adversarial attacks on saliency maps introduced by Ghorbani et al. (https://www.aaai.org/ojs/index.php/AAAI/article/view/4252).
TextAttack
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP.
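A usage sketch in the style of the TextAttack README; the victim model name, attack recipe, and dataset are examples rather than anything specific to this fork.

```python
import transformers
from textattack import Attacker
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# Example victim model; any sequence-classification model/tokenizer pair works.
model = transformers.AutoModelForSequenceClassification.from_pretrained("textattack/bert-base-uncased-imdb")
tokenizer = transformers.AutoTokenizer.from_pretrained("textattack/bert-base-uncased-imdb")
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Build a standard attack recipe and run it over IMDB test examples.
attack = TextFoolerJin2019.build(model_wrapper)
dataset = HuggingFaceDataset("imdb", split="test")
attacker = Attacker(attack, dataset)
attacker.attack_dataset()
```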