There are 0 repository under prompt-injection-defense topic.
PromptMe is an educational project that showcases security vulnerabilities in large language models (LLMs) and their web integrations. It includes 10 hands-on challenges inspired by the OWASP LLM Top 10, demonstrating how these vulnerabilities can be discovered and exploited in real-world scenarios.
A comprehensive reference for securing Large Language Models (LLMs). Covers OWASP GenAI Top-10 risks, prompt injection, adversarial attacks, real-world incidents, and practical defenses. Includes catalogs of red-teaming tools, guardrails, and mitigation strategies to help developers, researchers, and security teams deploy AI responsibly.
LangGuard Python Library
🛠️ Explore large language models through hands-on projects and tutorials to enhance your understanding and practical skills in natural language processing.
이모지 스머글링, 이모지 이베이젼 겉 핥기
Bidirectional Security Framework for Human/LLM Interfaces - RC9-FPR4 baseline frozen (ASR 2.76%, Wilson Upper 3.59% GATE PASS, FPR stratified: doc_with_codefence 0.79% Upper GATE PASS, pure_doc 4.69% Upper). RC10.3c development integrated (semantic veto, experimental). Tests: 833/853 (97.7%), MyPy clean, CI GREEN. Shadow deployment ready.
SecureFlow: Production-Ready Multi-Agent Financial Intelligence System
Tool that analyzes how prompt expansion and adversarial system prompts affect safety classification by LLMs.
🛡️ Explore tools for securing Large Language Models, uncovering their strengths and weaknesses in the realm of offensive and defensive security.