sqsunexeter's starred repositories

iTerm2-Color-Schemes

Over 250 terminal color schemes/themes for iTerm/iTerm2. Includes ports to Terminal, Konsole, PuTTY, Xresources, XRDB, Remmina, Termite, XFCE, Tilda, FreeBSD VT, Terminator, Kitty, MobaXterm, LXTerminal, Microsoft's Windows Terminal, Visual Studio, Alacritty

Language: Shell | License: NOASSERTION | Stargazers: 24695 | Issues: 342 | Issues: 108

nndl.github.io

Neural Network and Deep Learning (《神经网络与深度学习》), by Xipeng Qiu

tensor2tensor

Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.

Language: Python | License: Apache-2.0 | Stargazers: 15398 | Issues: 465 | Issues: 1247

ai-deadlines

⏰ AI conference deadline countdowns

Language: JavaScript | License: MIT | Stargazers: 5623 | Issues: 100 | Issues: 93

adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

Language: Python | License: MIT | Stargazers: 4800 | Issues: 98 | Issues: 883
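
As a rough illustration of how ART is typically used, here is a minimal evasion sketch: wrap a trained classifier and generate FGSM adversarial examples. The Iris/logistic-regression setup and the eps value are illustrative assumptions, not taken from the repo.

```python
# Hedged sketch: FGSM evasion with ART against a scikit-learn classifier.
# Model choice, dataset, and eps are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

x, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(x, y)

# Wrap the fitted model so ART can query predictions and loss gradients.
classifier = SklearnClassifier(model=model)

# Generate adversarial examples under an L-inf budget (illustrative value).
attack = FastGradientMethod(estimator=classifier, eps=0.3)
x_adv = attack.generate(x=x)

clean_acc = np.mean(model.predict(x) == y)
adv_acc = np.mean(model.predict(x_adv) == y)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```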

Conference-Acceptance-Rate

Acceptance rates for the major AI conferences

Language: Jupyter Notebook | License: MIT | Stargazers: 4176 | Issues: 129 | Issues: 28

PromptPapers

Must-read papers on prompt-based tuning for pre-trained language models.

NLP-Interview-Notes

This repository mainly collects interview questions for NLP algorithm engineers.

backdoor-learning-resources

A list of backdoor learning resources

awesome-rl-for-cybersecurity

A curated list of resources dedicated to reinforcement learning applied to cyber security.

PromptKG

PromptKG Family: a gallery of prompt learning & KG-related research works, toolkits, and paper lists.

Language: Python | License: MIT | Stargazers: 687 | Issues: 12 | Issues: 52

OpenAttack

An Open-Source Package for Textual Adversarial Attack.

Language: Python | License: MIT | Stargazers: 682 | Issues: 18 | Issues: 78
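
A quick sketch of how OpenAttack is typically driven, based on my recollection of its README; the built-in victim name "BERT.SST", the SST sample, and the field mapping are assumptions.

```python
# Hedged sketch: evaluating a word-substitution attack (PWWS) with OpenAttack.
# "BERT.SST" victim and the SST dataset mapping are assumed from the README.
import OpenAttack as oa
import datasets

# OpenAttack expects each example to expose "x" (text) and "y" (label).
def dataset_mapping(example):
    return {"x": example["sentence"], "y": 1 if example["label"] > 0.5 else 0}

victim = oa.loadVictim("BERT.SST")  # built-in BERT victim, downloaded on first use
dataset = datasets.load_dataset("sst", split="train[:20]").map(function=dataset_mapping)

attacker = oa.attackers.PWWSAttacker()
attack_eval = oa.AttackEval(attacker, victim)
attack_eval.eval(dataset, visualize=True)
```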

robustlearn

Robust machine learning for responsible AI

Language: Python | License: MIT | Stargazers: 452 | Issues: 9 | Issues: 20

backdoors101

Backdoors framework for deep learning and federated learning. A lightweight tool for conducting research on backdoors.

Language: Python | License: MIT | Stargazers: 333 | Issues: 7 | Issues: 24

eran

ETH Robustness Analyzer for Deep Neural Networks

Language: Python | License: Apache-2.0 | Stargazers: 313 | Issues: 21 | Issues: 97

universal-triggers

Universal Adversarial Triggers for Attacking and Analyzing NLP (EMNLP 2019)

Language: Python | License: MIT | Stargazers: 292 | Issues: 9 | Issues: 21

auto_LiRPA

auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs

Language: Python | License: NOASSERTION | Stargazers: 284 | Issues: 8 | Issues: 80
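
A minimal sketch of the auto_LiRPA workflow: wrap a model, attach an L-inf perturbation to its input, and compute certified output bounds with CROWN. The toy network and epsilon below are illustrative assumptions.

```python
# Hedged sketch: certified output bounds with auto_LiRPA (CROWN).
# The toy two-layer network and eps=0.1 are illustrative assumptions.
import torch
import torch.nn as nn
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
x = torch.randn(1, 4)

# Convert the model into a bound-propagation graph using a dummy input.
bounded_model = BoundedModule(model, torch.zeros(1, 4))

# Allow every input coordinate to vary by +/- 0.1 (L-inf ball).
ptb = PerturbationLpNorm(norm=float("inf"), eps=0.1)
bounded_x = BoundedTensor(x, ptb)

# Certified lower/upper bounds on each output logit under the perturbation.
lb, ub = bounded_model.compute_bounds(x=(bounded_x,), method="CROWN")
print(lb, ub)
```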

Awesome-Backdoor-in-Deep-Learning

A curated list of papers & resources on backdoor attacks and defenses in deep learning.

Language: Python | License: GPL-3.0 | Stargazers: 167 | Issues: 9 | Issues: 1

backdoor-toolbox

A compact toolbox for backdoor attacks and defenses.

DART

[ICLR 2022] Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners

Language: Python | License: MIT | Stargazers: 128 | Issues: 6 | Issues: 9

SememePSO-Attack

Code and data of the ACL 2020 paper "Word-level Textual Adversarial Attacking as Combinatorial Optimization"

Language: Python | License: MIT | Stargazers: 86 | Issues: 9 | Issues: 6

hard-label-attack

Natural Language Attacks in a Hard Label Black Box Setting.

BkdAtk-LWS

Code and data of the ACL 2021 paper "Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution"

Language: Python | License: MIT | Stargazers: 16 | Issues: 8 | Issues: 3

Universal_Pert_Cert

This repo is the official implementation of the ICLR'23 paper "Towards Robustness Certification Against Universal Perturbations." We calculate the certified robustness against universal perturbations (UAP/backdoor) given a trained model.

Language: Python | License: MIT | Stargazers: 12 | Issues: 3 | Issues: 1

TextAttack

TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP

Language: Python | License: MIT | Stargazers: 9 | Issues: 0 | Issues: 0
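
A short sketch of how TextAttack is commonly invoked: wrap a Hugging Face classifier and run an attack recipe such as TextFooler over a few dataset examples. The model and dataset names below are illustrative assumptions.

```python
# Hedged sketch: running the TextFooler recipe with TextAttack.
# The IMDB model/dataset names are illustrative assumptions.
import transformers
import textattack
from textattack.attack_recipes import TextFoolerJin2019

model_name = "textattack/bert-base-uncased-imdb"
model = transformers.AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)

# Wrap the model so TextAttack can query it for predictions.
model_wrapper = textattack.models.wrappers.HuggingFaceModelWrapper(model, tokenizer)

# Build the attack recipe and run it on a handful of IMDB test examples.
attack = TextFoolerJin2019.build(model_wrapper)
dataset = textattack.datasets.HuggingFaceDataset("imdb", split="test")
attack_args = textattack.AttackArgs(num_examples=5)
attacker = textattack.Attacker(attack, dataset, attack_args)
attacker.attack_dataset()
```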

infersent-train-2021

Contains files and scripts for training the InferSent algorithm.

Language: Jupyter Notebook | Stargazers: 2 | Issues: 2 | Issues: 0

TextVerifer

Towards Local Robustness Verification for Textual Classifiers with Certifiable Guarantees in Hamming Space - ACL 2023

TAADpapers

Must-read Papers on Textual Adversarial Attack and Defense

Stargazers: 1 | Issues: 0 | Issues: 0

NLP-progress

Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks.

License: MIT | Stargazers: 1 | Issues: 2 | Issues: 0