c0de3's repositories

ai-exploits

A collection of real-world AI/ML exploits for responsibly disclosed vulnerabilities

License: NOASSERTION · Stargazers: 0 · Issues: 0

AutoPoison

The official repository of the paper "On the Exploitability of Instruction Tuning".

License: Apache-2.0 · Stargazers: 0 · Issues: 0

awesome-ml-privacy-attacks

An awesome list of papers on privacy attacks against machine learning

Stargazers: 0 · Issues: 0

CAA

Steering Llama 2 with Contrastive Activation Addition

License: MIT · Stargazers: 0 · Issues: 0
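The idea behind contrastive activation addition is to extract a steering vector from the difference in activations between two contrasting prompts and add it back during generation. A minimal sketch follows, assuming a HuggingFace Llama 2 checkpoint; the model name, layer index, prompts, and steering strength are illustrative assumptions, not the repository's actual configuration.

```python
# Minimal sketch of contrastive activation steering on a HuggingFace causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-chat-hf"  # assumption: any causal LM works
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16, device_map="auto")
model.eval()

LAYER = 13  # assumption: an intermediate decoder layer

def layer_mean(prompt: str) -> torch.Tensor:
    """Mean hidden state at the chosen layer's output for a prompt."""
    ids = tok(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so layer LAYER maps to index LAYER + 1.
    return out.hidden_states[LAYER + 1][0].mean(dim=0)

# Contrastive pair: the steering vector is the activation difference.
steer = layer_mean("I love being helpful and honest.") - layer_mean("I refuse to help with anything.")

def add_vector(module, inputs, output):
    # Decoder layers usually return a tuple whose first element is the hidden states.
    hs = output[0] if isinstance(output, tuple) else output
    hs = hs + 4.0 * steer.to(hs.dtype)  # 4.0: assumed steering strength
    return (hs,) + output[1:] if isinstance(output, tuple) else hs

handle = model.model.layers[LAYER].register_forward_hook(add_vector)
ids = tok("Tell me about yourself.", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**ids, max_new_tokens=50)[0], skip_special_tokens=True))
handle.remove()  # remove the hook to restore unsteered behavior
```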

caldera

Automated Adversary Emulation Platform

License: Apache-2.0 · Stargazers: 0 · Issues: 0

carving

Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives

License: MIT · Stargazers: 0 · Issues: 0

CVE-2024-29943

A Pwn2Own SpiderMonkey JIT Bug: From Integer Range Inconsistency to Bound Check Elimination then RCE

Stargazers: 0 · Issues: 0

Data-governance

This project is an open source AI data governance framework designed to assist organizations in managing and maintaining their data assets to ensure data quality, consistency, and security.

License: Apache-2.0 · Stargazers: 0 · Issues: 0

Deepfake_detection_using_deep_learning

This project aims to detect video deepfakes using deep learning techniques such as ResNeXt and LSTM. Detection is achieved through transfer learning: a pretrained ResNeXt CNN extracts a feature vector per frame, and an LSTM layer is then trained on those features. For more details, see the documentation.

License: GPL-3.0 · Stargazers: 0 · Issues: 0
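A minimal sketch of the ResNeXt-feature-plus-LSTM architecture described above, using torchvision's pretrained ResNeXt-50 as a frozen feature extractor; the hidden size, sequence length, and frame resolution are illustrative assumptions rather than the project's actual settings.

```python
import torch
import torch.nn as nn
from torchvision import models

class DeepfakeDetector(nn.Module):
    def __init__(self, hidden_dim: int = 512, num_classes: int = 2):
        super().__init__()
        backbone = models.resnext50_32x4d(weights="IMAGENET1K_V1")
        # Drop the classification head; keep the 2048-d pooled features.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        for p in self.features.parameters():
            p.requires_grad = False  # transfer learning: freeze the CNN
        self.lstm = nn.LSTM(2048, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, seq_len, 3, 224, 224) -- a sequence of frames per video
        b, t, c, h, w = frames.shape
        feats = self.features(frames.view(b * t, c, h, w)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.classifier(out[:, -1])  # classify from the last time step

# Example: score a batch of 2 videos with 10 frames each.
logits = DeepfakeDetector()(torch.randn(2, 10, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```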

DeepSeek-Coder

DeepSeek Coder: Let the Code Write Itself

License: MIT · Stargazers: 0 · Issues: 0
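A quick sketch of generating code with a DeepSeek Coder instruct checkpoint through HuggingFace transformers; the 1.3B model is assumed here only to keep the example small, and larger checkpoints can be swapped in.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

NAME = "deepseek-ai/deepseek-coder-1.3b-instruct"  # assumption: smallest instruct variant
tok = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForCausalLM.from_pretrained(NAME, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user",
             "content": "Write a Python function that checks whether a string is a palindrome."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True,
                                 return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=256, do_sample=False,
                     eos_token_id=tok.eos_token_id)
# Decode only the newly generated tokens.
print(tok.decode(out[0][inputs.shape[1]:], skip_special_tokens=True))
```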

ExtractGPT

Attribute Value Extraction using Large Language Models

License: Apache-2.0 · Stargazers: 0 · Issues: 0
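For context, a generic prompt-based attribute/value extraction sketch in the spirit of ExtractGPT; this is not the repository's own API, just an illustration using the openai client against any chat-completions endpoint, with the model name as an assumption.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def extract_attributes(product_title: str, attributes: list[str]) -> dict:
    """Ask the model to pull the requested attributes out of a product title."""
    prompt = (
        "Extract the following attributes from the product title and answer "
        f"with a JSON object (use null if an attribute is absent): {attributes}\n"
        f"Title: {product_title}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model works
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

print(extract_attributes(
    "Sony WH-1000XM4 Wireless Noise Cancelling Headphones, Black",
    ["brand", "model", "color"],
))
```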

LabASPIRE

Contains several related topics about the area under research.

Stargazers: 0 · Issues: 0

LLaMA2-Accessory

An Open-source Toolkit for LLM Development

License: NOASSERTION · Stargazers: 0 · Issues: 0

llama3-jailbreak

A trivial programmatic Llama 3 jailbreak. Sorry Zuck!

Stargazers: 0 · Issues: 0

llm-adaptive-attacks

Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [arXiv, Apr 2024]

License: MIT · Stargazers: 0 · Issues: 0

LLM4Decompile

Reverse Engineering: Decompiling Binary Code with Large Language Models

License: MIT · Stargazers: 0 · Issues: 0
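A rough sketch of how such a decompilation model is typically driven through transformers: feed the lifted assembly of one function and generate the predicted source. The checkpoint id and prompt wording below are assumptions; check the repository for the released models and the exact prompt template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

NAME = "LLM4Binary/llm4decompile-1.3b-v1.5"  # assumption: see the repo for real checkpoint ids
tok = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForCausalLM.from_pretrained(NAME, torch_dtype=torch.bfloat16, device_map="auto")

asm = open("func0.asm").read()  # assembly previously dumped from the target binary
prompt = f"# This is the assembly code:\n{asm}\n# What is the source code?\n"  # assumed template
ids = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**ids, max_new_tokens=512, do_sample=False)
# Print only the generated source, not the echoed prompt.
print(tok.decode(out[0][ids["input_ids"].shape[1]:], skip_special_tokens=True))
```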

LLMFuzzer

🧠 LLMFuzzer: the first open-source fuzzing framework designed specifically for Large Language Models (LLMs), especially for their integrations into applications via LLM APIs. 🚀💥

License: MIT · Stargazers: 0 · Issues: 0

Local-LLM-Server

A quick way to build a private large language model (LLM) server that provides OpenAI-compatible interfaces.

License: Apache-2.0 · Stargazers: 0 · Issues: 0
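Because the server exposes OpenAI-compatible interfaces, any OpenAI client can talk to it by overriding the base URL. A minimal sketch is below; the host, port, and model name are assumptions and should match whatever your deployment reports.

```python
from openai import OpenAI

# Point the standard OpenAI client at the local server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-model",  # assumption: the model name registered by your server
    messages=[{"role": "user", "content": "Summarize what an OpenAI-compatible API is."}],
)
print(resp.choices[0].message.content)
```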

ollama

Get up and running with Llama 2, Mistral, Gemma, and other large language models.

License: MIT · Stargazers: 0 · Issues: 0
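Ollama serves models over a local REST API (port 11434 by default). A minimal sketch of a non-streaming generate call; it assumes the server is running and the model has already been pulled, e.g. with `ollama pull llama2`.

```python
import requests

# Ask the local Ollama server for a single, non-streamed completion.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Why is the sky blue?", "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```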

pal

PAL: Proxy-Guided Black-Box Attack on Large Language Models

License: MIT · Stargazers: 0 · Issues: 0

PoisonPrompt

Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models" (IEEE ICASSP 2024). Demo: http://124.220.228.133:11003

License: MIT · Stargazers: 0 · Issues: 0

PromptFuzz

PromptFuzz is an automated tool that generates high-quality fuzz drivers for libraries via a fuzz loop built on mutating LLM prompts.

Stargazers: 0 · Issues: 0

ps-fuzz

Make your GenAI apps safe and secure 🚀 Test and harden your system prompt

License: MIT · Stargazers: 0 · Issues: 0

PyRIT-Redteam

The Python Risk Identification Tool for generative AI (PyRIT) is an open access automation framework to empower security professionals and machine learning engineers to proactively find risks in their generative AI systems.

License: MIT · Stargazers: 0 · Issues: 0

RedPajama-Data

The RedPajama-Data repository contains code for preparing large datasets for training large language models.

License: Apache-2.0 · Stargazers: 0 · Issues: 0

ReNeLLM

The official implementation of our NAACL 2024 paper "A Wolf in Sheep’s Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily".

License: MIT · Stargazers: 0 · Issues: 0

semantic-kernel

Integrate cutting-edge LLM technology quickly and easily into your apps

License: MIT · Stargazers: 0 · Issues: 0

xTuring

Build, customize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our Discord community: https://discord.gg/TgHXuSJEk6

License: Apache-2.0 · Stargazers: 0 · Issues: 0