Giuseppe Massaro (N3mes1s)

N3mes1s

Geek Repo

Company:https://github.com/ReaQta

Location:Amsterdam

Home Page:https://twitter.com/#!/gn3mes1s

Github PK Tool:Github PK Tool


Organizations
ReaQta

Giuseppe Massaro's starred repositories

grok-1

Grok open release

Language:PythonLicense:Apache-2.0Stargazers:49141Issues:560Issues:200

SWE-agent

SWE-agent takes a GitHub issue and tries to automatically fix it, using GPT-4, or your LM of choice. It solves 12.47% of bugs in the SWE-bench evaluation set and takes just 1 minute to run.

Language:PythonLicense:MITStargazers:11862Issues:88Issues:309

reader

Convert any URL to an LLM-friendly input with a simple prefix https://r.jina.ai/

Language:TypeScriptLicense:Apache-2.0Stargazers:5580Issues:32Issues:71

xzbot

notes, honeypot, and exploit demo for the xz backdoor (CVE-2024-3094)

Language:GoStargazers:3468Issues:39Issues:0

cozo

A transactional, relational-graph-vector database that uses Datalog for query. The hippocampus for AI!

Language:RustLicense:MPL-2.0Stargazers:3234Issues:42Issues:138

tracecat

The open source Tines / Splunk SOAR alternative.

Language:PythonLicense:AGPL-3.0Stargazers:2203Issues:21Issues:70

PurpleLlama

Set of tools to assess and improve LLM security.

Language:PythonLicense:NOASSERTIONStargazers:2132Issues:32Issues:20

jailbreak_llms

[CCS'24] A dataset consists of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).

Language:Jupyter NotebookLicense:MITStargazers:1590Issues:21Issues:7

akto

Proactive, Open source API security → API discovery, Testing in CI/CD, Test Library with 150+ Tests, Add custom tests, Sensitive data exposure

Language:JavaLicense:MITStargazers:881Issues:14Issues:146

awesome-llm-security

A curation of awesome tools, documents and projects about LLM Security.

TheBigPromptLibrary

A collection of prompts, system prompts and LLM instructions

Language:HTMLLicense:MITStargazers:658Issues:19Issues:0

vec2text

utilities for decoding deep representations (like sentence embeddings) back to text

Language:PythonLicense:NOASSERTIONStargazers:642Issues:13Issues:39

CipherChat

A framework to evaluate the generalization capability of safety alignment for LLMs

Language:PythonLicense:MITStargazers:540Issues:8Issues:0

EasyJailbreak

An easy-to-use Python framework to generate adversarial jailbreak prompts.

Language:PythonLicense:GPL-3.0Stargazers:313Issues:5Issues:21

ps-fuzz

Make your GenAI Apps Safe & Secure :rocket: Test & harden your system prompt

Language:PythonLicense:MITStargazers:300Issues:10Issues:14

PromptInject

PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to adversarial prompt attacks. 🏆 Best Paper Awards @ NeurIPS ML Safety Workshop 2022

Language:PythonLicense:MITStargazers:276Issues:10Issues:2

llm-security

Dropbox LLM Security research code and results

Language:PythonLicense:Apache-2.0Stargazers:190Issues:6Issues:0

rigging

Lightweight LLM Interaction Framework

Language:PythonLicense:MITStargazers:150Issues:7Issues:6

llm-adaptive-attacks

Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [arXiv, Apr 2024]

Language:ShellLicense:MITStargazers:137Issues:3Issues:4

jailbreakbench

An Open Robustness Benchmark for Jailbreaking Language Models [arXiv 2024]

Language:PythonLicense:MITStargazers:110Issues:4Issues:3

dspy-redteam

Red-Teaming Language Models with DSPy

parley

Tree of Attacks (TAP) Jailbreaking Implementation

Language:PythonLicense:MITStargazers:80Issues:4Issues:0

ReNeLLM

The official implementation of our NAACL 2024 paper "A Wolf in Sheep’s Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily".

Language:PythonLicense:MITStargazers:52Issues:11Issues:3

curiosity_redteam

Official implementation of ICLR'24 paper, "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizXgXU)

Language:Jupyter NotebookLicense:MITStargazers:47Issues:5Issues:2

llm-misinformation

The dataset and code for the paper "Can LLM-Generated Misinformation Be Detected?"

PrivacyBackdoor

Privacy backdoors

Language:PythonStargazers:23Issues:0Issues:0
Language:PythonLicense:MITStargazers:11Issues:0Issues:0