There are 19 repositories under ai-security topic.
This repository is primarily maintained by Omar Santos (@santosomar) and includes thousands of resources related to ethical hacking, bug bounties, digital forensics and incident response (DFIR), artificial intelligence security, vulnerability research, exploit development, reverse engineering, and more.
🐢 Open-Source Evaluation & Testing for ML & LLM systems
A curated list of useful resources that cover Offensive AI.
A list of backdoor learning resources
A curated list of academic events on AI Security & Privacy
[CCS'24] SafeGen: Mitigating Unsafe Content Generation in Text-to-Image Models
Train AI (Keras + Tensorflow) to defend apps with Django REST Framework + Celery + Swagger + JWT - deploys to Kubernetes and OpenShift Container Platform
Performing website vulnerability scanning using OpenAI technologie
ATLAS tactics, techniques, and case studies data
Cyber-Security Bible! Theory and Tools, Kali Linux, Penetration Testing, Bug Bounty, CTFs, Malware Analysis, Cryptography, Secure Programming, Web App Security, Cloud Security, Devsecops, Ethical Hacking, Social Engineering, Privacy, Incident Response, Threat Assestment, Personal Security, Ai Security, Android Security, Iot Security, Standards.
pytorch implementation of Parametric Noise Injection for adversarial defense
[NDSS'24] Inaudible Adversarial Perturbation: Manipulating the Recognition of User Speech in Real Time
Website Prompt Injection is a concept that allows for the injection of prompts into an AI system via a website's. This technique exploits the interaction between users, websites, and AI systems to execute specific prompts that influence AI behavior.
This repository provide the studies on the security of language models for code (CodeLMs).
Learning to Identify Critical States for Reinforcement Learning from Videos (Accepted to ICCV'23)
The Prompt Injection Testing Tool is a Python script designed to assess the security of your AI system's prompt handling against a predefined list of user prompts commonly used for injection attacks. This tool utilizes the OpenAI GPT-3.5 model to generate responses to system-user prompt pairs and outputs the results to a CSV file for analysis.
Unofficial pytorch implementation of paper: Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
Python library for Modzy Machine Learning Operations (MLOps) Platform
AI/LLM Prompt Injection List is a curated collection of prompts designed for testing AI or Large Language Models (LLMs) for prompt injection vulnerabilities. This list aims to provide a comprehensive set of prompts that can be used to evaluate the behavior of AI or LLM systems when exposed to different types of inputs.
Image Prompt Injection is a Python script that demonstrates how to embed a secret prompt within an image using steganography techniques. This hidden prompt can be later extracted by an AI system for analysis, enabling covert communication with AI models through images.
Network exploit detection using highly accurate pre-trained deep neural networks with Celery + Keras + Tensorflow + Redis
Building Private Healthcare AI Assistant for Clinics Using Qdrant Hybrid Cloud, DSPy and Groq - Llama3
Do you want to learn AI Security but don't know where to start ? Take a look at this map.
The official JavaScript SDK for the Modzy Machine Learning Operations (MLOps) Platform.
Evaluation & testing framework for computer vision models
A curated collection of the latest academic research papers and developments in AI Security. This repository aims to provide a comprehensive source for researchers and enthusiasts to stay updated on AI Security trends and findings. Contributions welcome!
A prompt defence is a multi-layer defence that can be used to protect your applications against prompt injection attacks.
Datasets for training deep neural networks to defend software applications