Jason Ross (rossja)

Location: Rochester, NY

Home Page: http://jasonross.info

Twitter: @rossja

Jason Ross's repositories

prompt-injection-datasets

Datasets for using and building LLM prompt injection tooling.

Language: Python | License: Apache-2.0 | Stargazers: 1 | Issues: 0

ai-exploits

A collection of real-world AI/ML exploits for responsibly disclosed vulnerabilities.

Language: Python | License: NOASSERTION | Stargazers: 0 | Issues: 1

AITMWorker

Proof of concept: using a Cloudflare Worker for AITM (adversary-in-the-middle) attacks.

License: MIT | Stargazers: 0 | Issues: 0

awesome-llm-cybersecurity-tools

A curated list of large language model tools for cybersecurity research.

Stargazers: 0 | Issues: 0

Awesome_GPT_Super_Prompting

ChatGPT jailbreaks, GPT Assistants prompt leaks, GPTs prompt injection, LLM prompt security, super prompts, prompt hacking, prompt security, AI prompt engineering, adversarial machine learning.

License: GPL-3.0 | Stargazers: 0 | Issues: 0

awful-ai

😈 Awful AI is a curated list tracking current scary uses of AI, in the hope of raising awareness.

Stargazers: 0 | Issues: 0

ComPromptMized

ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications

Language: Python | Stargazers: 0 | Issues: 1

Damn-Vulnerable-RESTaurant-API-Game

Damn Vulnerable Restaurant is an intentionally vulnerable Web API game for learning and training, aimed at developers, ethical hackers, and security engineers.

License: GPL-3.0 | Stargazers: 0 | Issues: 0

dotfiles

configs and such

Language: Shell | Stargazers: 0 | Issues: 3

DSPy-blog

A tutorial on DSPy and whether automated prompt engineering lives up to the hype

Stargazers: 0 | Issues: 0

EasyJailbreak

An easy-to-use Python framework to generate adversarial jailbreak prompts.

Language: Python | License: GPL-3.0 | Stargazers: 0 | Issues: 1

evidently

Evaluate and monitor ML models from validation to production. Join our Discord: https://discord.com/invite/xZjKRaNp8b

License: Apache-2.0 | Stargazers: 0 | Issues: 0

garak

LLM vulnerability scanner

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 1

HarmBench

HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal

Language: Jupyter Notebook | License: MIT | Stargazers: 0 | Issues: 0

interpret

Fit interpretable models. Explain black-box machine learning.

Language: C++ | License: MIT | Stargazers: 0 | Issues: 1

intro-to-intelligent-apps

Introduces Intelligent Apps and helps organizations get started building them by incorporating Large Language Models (LLMs) via AI orchestration.

Language: Jupyter Notebook | License: MIT | Stargazers: 0 | Issues: 1

llm-answer-engine

Build a Perplexity-Inspired Answer Engine Using Next.js, Groq, Mixtral, Langchain, OpenAI, Brave & Serper

Language: TypeScript | Stargazers: 0 | Issues: 1

llm-vulnerable-recruitment-app

An example vulnerable app that integrates an LLM

License: Apache-2.0 | Stargazers: 0 | Issues: 0

LLM101n

LLM101n: Let's build a Storyteller

Stargazers: 0 | Issues: 0

Monocle

Tooling backed by an LLM for performing natural language searches against compiled target binaries. Search for encryption code, password strings, vulnerabilities, etc.

License: GPL-3.0 | Stargazers: 0 | Issues: 0

paperlib

An open-source academic paper management tool.

Language: TypeScript | License: GPL-3.0 | Stargazers: 0 | Issues: 1

pint-benchmark

A benchmark for prompt injection detection systems.

License: MIT | Stargazers: 0 | Issues: 0

prompt-injectinator

Tooling to help create prompt injection tests for generative AI models and for apps that consume their content.

Language: Python | License: BSD-3-Clause | Stargazers: 0 | Issues: 0

prompt-injection-defenses

Every practical and proposed defense against prompt injection.

Stargazers: 0 | Issues: 0

ps-fuzz

Make your GenAI apps safe and secure. 🚀 Test and harden your system prompt.

Language: Python | License: MIT | Stargazers: 0 | Issues: 0

PyRIT

The Python Risk Identification Tool for generative AI (PyRIT) is an open access automation framework to empower security professionals and machine learning engineers to proactively find risks in their generative AI systems.

Language: Python | License: MIT | Stargazers: 0 | Issues: 1

responsible-ai-toolbox

Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions.

Language: TypeScript | License: MIT | Stargazers: 0 | Issues: 1

textgrad

Automatic "Differentiation" via Text: using large language models to backpropagate textual gradients.

Language: Python | License: MIT | Stargazers: 0 | Issues: 0

z-js

The literally low-overhead JS framework!

License: MIT | Stargazers: 0 | Issues: 0