
Awesome MLSecOps

A curated list of awesome open-source tools, resources, and tutorials for MLSecOps (Machine Learning Security Operations).

Table of Contents

  • Open Source Security Tools
  • Attack Vectors
  • Blogs and Publications
  • Community Resources
  • Contributions

Open Source Security Tools

  • Garak - LLM vulnerability scanner.
  • Adversarial Robustness Toolbox - A library of defense methods for machine learning models against adversarial attacks.
  • MLSploit - A cloud framework for interactive experimentation with adversarial machine learning research.
  • TensorFlow Privacy - A library of privacy-preserving machine learning algorithms and tools.
  • Foolbox - A Python toolbox for creating and evaluating adversarial attacks and defenses.
  • Advertorch - A Python toolbox for adversarial robustness research.
  • Artificial Intelligence Threat Matrix - A framework for identifying and mitigating threats to machine learning systems.
  • Adversarial ML Threat Matrix - Adversarial Threat Landscape for AI Systems.
  • CleverHans - A library of adversarial examples and defenses for machine learning models.
  • AdvBox - A toolbox for generating adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow.
  • Audit AI - Bias Testing for Generalized Machine Learning Applications.
  • Deep Pwning - A lightweight framework for experimenting with machine learning models, with the goal of evaluating their robustness against a motivated adversary.
  • Privacy Meter - An open-source library to audit data privacy in statistical and machine learning algorithms.
  • TensorFlow Model Analysis - A library for analyzing, validating, and monitoring machine learning models in production.
  • PromptInject - A framework that assembles adversarial prompts.
  • TextAttack - A Python framework for adversarial attacks, data augmentation, and model training in NLP.
  • OpenAttack - An Open-Source Package for Textual Adversarial Attack.
  • TextFooler - A Model for Natural Language Attack on Text Classification and Inference.
  • Flawed Machine Learning Security - Practical examples of flawed ML security, together with security best practices across the end-to-end stages of the machine learning model lifecycle: training, packaging, and deployment.
  • Adversarial Machine Learning CTF - A CTF challenge demonstrating a security flaw common to most (all?) artificial neural networks: their vulnerability to adversarial images.
  • Damn Vulnerable LLM Project - A Large Language Model designed to be hacked.
  • Gandalf Lakera - A prompt-injection CTF playground.
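Several of the tools above (Adversarial Robustness Toolbox, Foolbox, CleverHans, AdvBox) center on gradient-based evasion attacks such as the Fast Gradient Sign Method (FGSM). As a minimal sketch of the core idea, here is FGSM against a toy logistic-regression model in pure Python; the weights are hypothetical and chosen only for illustration, and none of the listed libraries are used:

```python
import math

# Hypothetical trained weights for a toy logistic-regression model;
# real attacks target neural networks via frameworks like ART or Foolbox.
W = [1.0, -2.0, 0.5]
B = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    # Probability of class 1 under the toy model.
    return sigmoid(sum(wi * xi for wi, xi in zip(W, x)) + B)

def fgsm(x, y, eps):
    # Fast Gradient Sign Method under an L-infinity budget eps.
    # For logistic regression with cross-entropy loss, the gradient of the
    # loss w.r.t. the input is (p - y) * W, so the attack shifts each input
    # coordinate by eps in the direction sign((p - y) * W_i).
    p = predict(x)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, W)]
```

With `x = [0.2, -0.1, 0.4]` the toy model predicts class 1; after `fgsm(x, 1.0, 0.3)` the perturbed input is misclassified, even though no coordinate moved by more than 0.3. The libraries above implement the same idea against real networks, plus defenses and robustness metrics.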

Attack Vectors

Blogs and Publications

Community Resources

Contributions

All contributions to this list are welcome!


License: MIT