CyberAlbSecOP / Awesome_GPT_Super_Prompting

ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.

Geek Repo:Geek Repo

Github PK Tool:Github PK Tool

jfhdgsjyhfjyrd

What will you find in here:

  • ChatGPT Jailbreaks
  • GPT Assistants Prompt Leaks
  • GPTs Prompt Injection
  • LLM Prompt Security
  • Super Prompts
  • Prompt Hack
  • Prompt Security
  • Ai Prompt Engineering
  • Adversarial Machine Learning

⚠️ When you look into the "Latest Jailbreaks" folder, just check the latest additions for working Jailbreaks. ⚠️


Legend:

  • 🌟: Legendary!
  • 🔥: Hot Stuff

🚨 Jailbreaks

Explore techniques for bypassing restrictions on GPT models.


🕵️‍♂️ GPT Agents System Prompt Leaks

Find leaked prompts and system information from GPT agents.


🛡️ Prompt Injection

Resources focused on exploiting or defending against prompt injections.


🔐 Secure Prompting

Repositories dedicated to securing prompts and mitigating vulnerabilities.


🗂️ GPTs Lists

Collections of various GPT resources and lists.


📚 Prompts Libraries

Explore libraries of GPT prompts for various applications.


🛠️ Prompt Engineering

Resources to master the craft of prompt engineering.


🔎 Prompt Sources

Communities and forums for discovering and sharing prompts.


🧠 Cyber-Albsecop GPT Agents

Specialized GPT agents focused on cybersecurity and more.


To do for V3.00

  • Keep the repo updated and hot
  • Add more personal prompts
  • Add more external sources
  • Add instruction on how to use prompts

Star History

Star History Chart

Keywords:

ChatGPT Assistant Leak, Jailbreak Prompts, GPT Hacking, GPT Agents Hack, System Prompt Leaks, Prompt Injection, LLM Security, Super Prompts, AI Adversarial Prompting, Prompt Design, Secure AI, Prompt Security, Prompt Development, Prompt Collection, GPT Prompt Library, Secret System Prompts, Creative Prompts, Prompt Crafting, Prompt Engineering, Prompt Vulnerability, GPT prompt jailbreak, GPT-3 jailbreak, OpenAI GPT-3 jailbreak, GPT-4 jailbreak, GPT-3 bypass, GPT-3 limitations, GPT-3 restrictions, GPT-3 hack, GPT-3 exploit, GPT-3 security, GPT-3 vulnerabilities, GPT-3 unauthorized access, GPT-3 unauthorized use, GPT-3 unauthorized modifications, GPT-3 restrictions bypass, GPT-3 AI jailbreak, OpenAI jailbreak, GPT-3 security breach, GPT-3 unauthorized alterations, GPT-3 model jailbreak, GPT prompt hack, GPT-3 prompt ideas, GPT-4 prompt examples, AI writing prompts, creative writing prompts, generate text prompts, prompt generator tool, artificial intelligence prompts, text generation prompts, machine learning prompts, writing inspiration prompts, GPT prompt list, advanced prompt techniques, AI content prompts, prompt generation tips, text prompt strategies, GPT writing exercises, prompt brainstorming methods, automated prompt generation, prompt hacking, hacking prompts, prompt hacking tool, prompt generator, writing prompts, creative prompts, prompt ideas, prompt writing, writing inspiration, story prompts, prompt app, prompt challenge, daily prompts, prompt list, prompt examples, writing exercises, prompt website, prompt journal, prompt library, custom prompts, prompt engineer, prompt engineering, prompt software engineer, prompt system engineer, prompt network engineer, prompt support engineer, prompt technical engineer, prompt IT engineer, prompt automation engineer, prompt infrastructure engineer, prompt DevOps engineer, prompt software developer, prompt software development, prompt tech engineer, prompt software architecture, prompt engineer job, prompt engineer role, prompt engineer responsibilities, prompt engineer career, prompt engineer salary.

About

ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.

License:GNU General Public License v3.0