Thai Le's repositories

shield-defend-adversarial-texts

Repository of the paper "SHIELD: Defending Textual Neural Networks against Multiple Black-Box Adversarial Attacks with Stochastic Multi-Expert Patcher" accepted to ACL'22

MALCOM

MALCOM: Generating Malicious Comments to Attack Neural Fake News Detection Models. 20th IEEE International Conference on Data Mining (ICDM).

perturbations-in-the-wild

Repository of the paper "Perturbations in the Wild: Leveraging Human-Written Text Perturbations for Realistic Adversarial Attack and Defense", ACL'22 (Findings)

Language: Python · License: MIT · Stargazers: 7 · Issues: 2

Adversarial_SocialBots_WWW22

Source code for the paper "Socialbots on Fire: Modeling Adversarial Behaviors of Socialbots via Multi-Agent Hierarchical Reinforcement Learning." (Web Conference 2022)

ACL2021-DARCY-HoneypotDefenseNLP

Thai Le, Noseong Park, Dongwon Lee. A Sweet Rabbit Hole by DARCY: Using Honeypots to Detect Universal Trigger’s Adversarial Attacks. 59th Annual Meeting of the Association for Computational Linguistics (ACL) 2021.

Language: Python · Stargazers: 4 · Issues: 3

synthetic_clickbait

[ASONAM 2019] Synthetic Texts (clickbaits) Generation using Different Variations of VAE. Code for paper "5 Sources of Clickbaits You Should Know! Using Synthetic Clickbaits to Improve Prediction and Distinguish between Bot-Generated and Human-Written Headlines"

Language: Python · Stargazers: 1 · Issues: 3

audioset-dl

Download AudioSet for Vision-Audio-Text Pre-training

Language: Python · Stargazers: 0 · Issues: 1

Awesome-explainable-AI

A collection of research materials on explainable AI/ML

License: MIT · Stargazers: 0 · Issues: 2

CAPS

Implementation of CAPS: Comprehensible Abstract Policy Summaries

Language: Python · Stargazers: 0 · Issues: 2

certified-word-sub

Official repository for Jia, Raghunathan, Göksel, and Liang, "Certified Robustness to Adversarial Word Substitutions" (EMNLP 2019)

Language: Python · License: MIT · Stargazers: 0 · Issues: 2

CSrankings

A web app for ranking computer science departments according to their research output in selective venues, and for finding active faculty across a wide range of areas.

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 1

DetectGPT-Single

DetectGPT code, adapted for prediction on a single document. All credit goes to https://github.com/eric-mitchell/detect-gpt

Language: Python · Stargazers: 0 · Issues: 3

facenet-pytorch

Pretrained PyTorch face detection (MTCNN) and facial recognition (InceptionResnet) models

Language: Python · License: MIT · Stargazers: 0 · Issues: 0

graph_sample_rl

Influence Maximization in Unknown Social Networks: Learning Policies for Effective Graph Sampling (official code repository)

Language: Python · License: MIT · Stargazers: 0 · Issues: 2

lethaiq.github.io

Personal Website of Thai Le

Language: HTML · Stargazers: 0 · Issues: 2

Mutant-X

Code for the Authorship Obfuscation tool called "Mutant-X" presented in PoPETs 2019 (https://petsymposium.org/2019/files/papers/issue4/popets-2019-0058.pdf)

Language: Python · Stargazers: 0 · Issues: 1

Obfuscation-Detection

Obfuscation detection tool. Given a document, it tells whether the document was written by a human or altered by an automated authorship obfuscation tool.

Language: CSS · Stargazers: 0 · Issues: 1

OpenAttack

An open-source package for textual adversarial attacks.

Language: Python · License: MIT · Stargazers: 0 · Issues: 2

sbryngelson.github.io

Bryngelson research group website

Language: TeX · Stargazers: 0 · Issues: 2

SysFake-1

A classifier to help users identify false news.

Language: Jupyter Notebook · Stargazers: 0 · Issues: 2

TAADpapers

Must-read Papers on Textual Adversarial Attack and Defense

Stargazers: 0 · Issues: 2

xai-iml-sota

Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.

Language: R · Stargazers: 0 · Issues: 2

XAIFooler_EMNLP23

Source code for "Are Your Explanations Reliable? Investigating the Stability of LIME in Explaining Text Classifiers by Marrying XAI and Adversarial Attack" (EMNLP 2023)

Language: Python · License: MIT · Stargazers: 0 · Issues: 0