There are 20 repositories under prompt-injection topic.
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
Advanced Code and Text Manipulation Prompts for Various LLMs. Suitable for Deepseek, GPT o1, Claude, Llama3, Gemini, and other high-performance open-source LLMs.
🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring safety & security. 🛡️ Features include text quality, relevance metrics, & sentiment analysis. 📊 A comprehensive tool for LLM observability. 👀
💼 another CV template for your job application, yet powered by Typst and more
# Prompt Engineering Hub ⭐️ lovable.dev no code builders: https://www.aidevelopers.tech/
Every practical and proposed defense against prompt injection.
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
Self-hardening firewall for large language models
Prompts of GPT-4V & DALL-E3 to full utilize the multi-modal ability. GPT4V Prompts, DALL-E3 Prompts.
This repository provides implementation to formalize and benchmark Prompt Injection attacks and defenses
prompt attack-defense, prompt Injection, reverse engineering notes and examples | 提示词对抗、破解例子与笔记
gpt_server是一个用于生产级部署LLMs或Embedding的开源框架。
A benchmark for prompt injection detection systems.
Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks
A prompt injection game to collect data for robust ML research
This is The most comprehensive prompt hacking course available, which record our progress on a prompt engineering and prompt hacking course.
My inputs for the LLM Gandalf made by Lakera
Build production ready apps for GPT using Node.js & TypeScript
This project investigates the security of large language models by performing binary classification of a set of input prompts to discover malicious prompts. Several approaches have been analyzed using classical ML algorithms, a trained LLM model, and a fine-tuned LLM model.
jailbreakme.xyz is an open-source decentralized app (dApp) where users are challenged to try and jailbreak pre-existing LLMs in order to find weaknesses and be rewarded. 🏆
Whispers in the Machine: Confidentiality in LLM-integrated Systems
Turning Gandalf against itself. Use LLMs to automate playing Lakera Gandalf challenge without needing to set up an account with a platform provider.
Short list of indirect prompt injection attacks for OpenAI-based models.
LLM | Security | Operations in one github repo with good links and pictures.
My solutions for Lakera's Gandalf
Manual Prompt Injection / Red Teaming Tool
The Security Toolkit for managing Generative AI(especially LLMs) and Supervised Learning processes(Learning and Inference).
Guard your LangChain applications against prompt injection with Lakera ChainGuard.
🤯 AI Security EXPOSED! Live Demos Showing Hidden Risks of 🤖 Agentic AI Flows: 💉Prompt Injection, ☣️ Data Poisoning. Watch the recorded session:
MINOTAUR: The STRONGEST Secure Prompt EVER! Prompt Security Challenge, Impossible GPT Security, Prompts Cybersecurity, Prompting Vulnerabilities, FlowGPT, Secure Prompting, Secure LLMs, Prompt Hacker, Cutting-edge Ai Security, Unbreakable GPT Agent, Anti GPT Leak, System Prompt Security.
A prompt defence is a multi-layer defence that can be used to protect your applications against prompt injection attacks.