sqsunexeter's starred repositories
iTerm2-Color-Schemes
Over 250 terminal color schemes/themes for iTerm/iTerm2. Includes ports to Terminal, Konsole, PuTTY, Xresources, XRDB, Remmina, Termite, XFCE, Tilda, FreeBSD VT, Terminator, Kitty, MobaXterm, LXTerminal, Microsoft's Windows Terminal, Visual Studio, Alacritty
nndl.github.io
"Neural Network and Deep Learning" (《神经网络与深度学习》), by Xipeng Qiu
tensor2tensor
Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
ai-deadlines
:alarm_clock: AI conference deadline countdowns
adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Conference-Acceptance-Rate
Acceptance rates for the major AI conferences
PromptPapers
Must-read papers on prompt-based tuning for pre-trained language models.
NLP-Interview-Notes
A repository of interview questions for NLP algorithm engineer roles
backdoor-learning-resources
A list of backdoor learning resources
awesome-rl-for-cybersecurity
A curated list of resources dedicated to reinforcement learning applied to cyber security.
OpenAttack
An Open-Source Package for Textual Adversarial Attack.
robustlearn
Robust machine learning for responsible AI
backdoors101
Backdoors Framework for Deep Learning and Federated Learning. A lightweight tool for conducting research on backdoors.
universal-triggers
Universal Adversarial Triggers for Attacking and Analyzing NLP (EMNLP 2019)
auto_LiRPA
auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs
Awesome-Backdoor-in-Deep-Learning
A curated list of papers & resources on backdoor attacks and defenses in deep learning.
backdoor-toolbox
A compact toolbox for backdoor attacks and defenses.
SememePSO-Attack
Code and data of the ACL 2020 paper "Word-level Textual Adversarial Attacking as Combinatorial Optimization"
hard-label-attack
Natural Language Attacks in a Hard Label Black Box Setting.
BkdAtk-LWS
Code and data of the ACL 2021 paper "Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution"
Universal_Pert_Cert
This repo is the official implementation of the ICLR'23 paper "Towards Robustness Certification Against Universal Perturbations." It computes the certified robustness of a trained model against universal perturbations (UAPs/backdoor triggers).
TextAttack
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP
infersent-train-2021
Files and scripts for training the InferSent algorithm
TextVerifer
Towards Local Robustness Verification for Textual Classifiers with Certifiable Guarantees in Hamming Space - ACL 2023
TAADpapers
Must-read Papers on Textual Adversarial Attack and Defense
NLP-progress
Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks.