Anthena Matrix (AnthenaMatrix)

AnthenaMatrix

Geek Repo

Company:Anthena Matrix

Location:Cloud

Home Page:https://anthenamatrix.com

Twitter:@AnthenaMatrix

Github PK Tool:Github PK Tool

Anthena Matrix's repositories

Website-Prompt-Injection

Website Prompt Injection is a concept that allows for the injection of prompts into an AI system via a website's. This technique exploits the interaction between users, websites, and AI systems to execute specific prompts that influence AI behavior.

Language:HTMLLicense:MITStargazers:32Issues:0Issues:0

Prompt-Injection-Testing-Tool

The Prompt Injection Testing Tool is a Python script designed to assess the security of your AI system's prompt handling against a predefined list of user prompts commonly used for injection attacks. This tool utilizes the OpenAI GPT-3.5 model to generate responses to system-user prompt pairs and outputs the results to a CSV file for analysis.

Language:PythonLicense:MITStargazers:19Issues:0Issues:0

Image-Prompt-Injection

Image Prompt Injection is a Python script that demonstrates how to embed a secret prompt within an image using steganography techniques. This hidden prompt can be later extracted by an AI system for analysis, enabling covert communication with AI models through images.

AI-Image-Data-Poisoning

AI Image Data Poisoning is a Python script that demonstrates how to add imperceptible perturbations to images, known as adversarial noise, which can disrupt the training process of AI models.

Language:PythonLicense:MITStargazers:16Issues:0Issues:0

ASCII-Art-Prompt-Injection

ASCII Art Prompt Injection is a novel approach to hacking AI assistants using ASCII art. This project leverages the distracting nature of ASCII art to bypass security measures and inject prompts into large language models, such as GPT-4, leading them to provide unintended or harmful responses.

License:MITStargazers:12Issues:0Issues:0

Many-Shot-Jailbreaking

Research on "Many-Shot Jailbreaking" in Large Language Models (LLMs). It unveils a novel technique capable of bypassing the safety mechanisms of LLMs, including those developed by Anthropic and other leading AI organizations.

License:MITStargazers:12Issues:0Issues:0

AI-Prompt-Injection-List

AI/LLM Prompt Injection List is a curated collection of prompts designed for testing AI or Large Language Models (LLMs) for prompt injection vulnerabilities. This list aims to provide a comprehensive set of prompts that can be used to evaluate the behavior of AI or LLM systems when exposed to different types of inputs.

License:MITStargazers:8Issues:1Issues:0

AI-Vulnerability-Assessment-Framework

The AI Vulnerability Assessment Framework is an open-source checklist designed to guide users through the process of assessing the vulnerability of artificial intelligence (AI) systems to various types of attacks and security threats

License:MITStargazers:8Issues:2Issues:0

The-I-Exemption-Bypassing-LLM-Ethical-Filters

The "I" Exemption, is a curious behavior in some LLMs. We discover how these AI systems might shy away from directly assisting with unethical actions if you ask in the first person ("I"). But with a clever rephrase to a general scenario ("they"), they might spill the beans and explain the unethical method.

License:MITStargazers:8Issues:0Issues:0

AI-Audio-Data-Poisoning

AI Audio Data Poisoning is a Python script that demonstrates how to add adversarial noise to audio data. This technique, known as audio data poisoning, involves injecting imperceptible noise into audio files to manipulate the behavior of AI systems trained on this data.

Language:PythonLicense:MITStargazers:7Issues:0Issues:0

AnthenaMatrix

Config files for my GitHub profile.

Stargazers:6Issues:0Issues:0