Jason Ross's repositories
prompt-injection-datasets
Datasets for using and building LLM prompt injection tooling
ai-exploits
A collection of real-world AI/ML exploits for responsibly disclosed vulnerabilities
AITMWorker
Proof of concept: using a Cloudflare Worker for AITM (adversary-in-the-middle) attacks
awesome-llm-cybersecurity-tools
A curated list of large language model tools for cybersecurity research.
Awesome_GPT_Super_Prompting
ChatGPT jailbreaks, GPT Assistants prompt leaks, GPTs prompt injection, LLM prompt security, super prompts, prompt hacking, AI prompt engineering, and adversarial machine learning.
awful-ai
😈 Awful AI is a curated list tracking current scary uses of AI, in hopes of raising awareness
ComPromptMized
ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications
Damn-Vulnerable-RESTaurant-API-Game
Damn Vulnerable Restaurant is an intentionally vulnerable web API game for learning and training, aimed at developers, ethical hackers, and security engineers.
DSPy-blog
A tutorial on DSPy and whether automated prompt engineering lives up to the hype
EasyJailbreak
An easy-to-use Python framework to generate adversarial jailbreak prompts.
evidently
Evaluate and monitor ML models from validation to production. Join our Discord: https://discord.com/invite/xZjKRaNp8b
HarmBench
HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal
intro-to-intelligent-apps
This repository introduces Intelligent Apps and helps organizations get started building them, incorporating Large Language Models (LLMs) via AI orchestration.
llm-answer-engine
Build a Perplexity-inspired answer engine using Next.js, Groq, Mixtral, LangChain, OpenAI, Brave, and Serper
llm-vulnerable-recruitment-app
An example vulnerable app that integrates an LLM
LLM101n
LLM101n: Let's build a Storyteller
Monocle
Tooling backed by an LLM for performing natural language searches against compiled target binaries. Search for encryption code, password strings, vulnerabilities, etc.
pint-benchmark
A benchmark for prompt injection detection systems.
prompt-injectinator
Tooling to help create prompt injection tests for generative AI models and for apps that consume their content
prompt-injection-defenses
Every practical and proposed defense against prompt injection.
ps-fuzz
Make your GenAI apps safe and secure 🚀 Test and harden your system prompt
responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing user interfaces and libraries for model and data exploration and assessment, enabling a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and to take better data-driven actions.
textgrad
Automatic "Differentiation" via Text: using large language models to backpropagate textual gradients.
www-project-top-10-for-large-language-model-applications
OWASP Foundation web repository
z-js
The literally low-overhead JS framework!