FudanNLPLAB / EasyJailbreak

An easy-to-use Python framework to generate adversarial jailbreak prompts.

Home Page: https://easyjailbreak.github.io/


—— An easy-to-use Python framework to generate adversarial jailbreak prompts by assembling different methods

EasyJailbreak Documentation


Introduction

What is EasyJailbreak?

EasyJailbreak is an easy-to-use Python framework designed for researchers and developers focusing on LLM security. Specifically, EasyJailbreak decomposes the mainstream jailbreaking process into several iterable steps: initialize mutation seeds, select suitable seeds, add constraints, mutate, attack, and evaluate. On this basis, EasyJailbreak provides a component for each step, constructing a playground for further research and experimentation. More details can be found in our paper.

Setup

There are two methods to install EasyJailbreak.

  1. For users who only require the approaches (or recipes) collected in EasyJailbreak, execute the following command:
pip install easyjailbreak
  2. For users interested in adding new components (e.g., new mutation or evaluation methods), follow these steps:
git clone https://github.com/nitwtog/EasyJailbreak.git
cd EasyJailbreak
pip install -e .
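
Whichever method you use, a quick import check confirms the installation. This is a minimal sketch; the classes imported here are the ones used later in this README.

# Minimal sanity check that the installation succeeded.
import easyjailbreak
from easyjailbreak.datasets import JailbreakDataset  # core class used throughout this README

print('EasyJailbreak loaded from:', easyjailbreak.__file__)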

Project Structure

This project is mainly divided into three parts.

  1. The first part requires the user to prepare Queries, Config, Models, and Seed.

  2. The second part is the main part, consisting of two processes that form a loop structure, namely Mutation and Inference (a minimal sketch of this loop follows the list).

    1. In the Mutation process, the program first selects the most promising jailbreak prompts through the Selector, then transforms the prompts through the Mutator, and finally keeps only the expected prompts through the Filter.
    2. In the Inference process, the prompts are used to attack the Target (model) and obtain the target model's responses. The responses are then fed into the Evaluator to score the attack's effectiveness for this round, and the score is passed back to the Selector to complete one cycle.
  3. In the third part, you get a Report. Once a stopping criterion is met, the loop ends and the user receives a report on each attack (including the jailbreak prompts, the Target model's responses, the Evaluator's scores, etc.).
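
To make the loop concrete, here is a minimal sketch of one Mutation-Inference cycle. It is pseudocode rather than the real EasyJailbreak interface: the objects passed in (selector, mutators, constraint, target_model, evaluator) and the methods called on them are assumptions that mirror the description above; the actual classes and their usage are shown in the DIY section below and in the documentation.

# Illustrative pseudocode of the Mutation-Inference loop described above.
# NOTE: the method names on selector/mutator/constraint/evaluator are
# assumptions chosen for readability, not the exact EasyJailbreak API.
def run_attack_loop(selector, mutators, constraint, target_model, evaluator,
                    max_rounds=10, success_threshold=1.0):
    batch = None
    for _ in range(max_rounds):
        # --- Mutation ---
        batch = selector.select()          # pick the most promising jailbreak prompts
        for mutator in mutators:
            batch = mutator(batch)         # transform the prompts
        batch = constraint(batch)          # keep only the expected prompts

        # --- Inference ---
        for instance in batch:
            instance.response = target_model.generate(instance.jailbreak_prompt)
        scores = evaluator(batch)          # score this round's attack effectiveness
        selector.update(batch)             # feed the results back, completing one cycle

        if scores and max(scores) >= success_threshold:
            break                          # a stopping mechanism ends the loop
    return batch                           # the final instances are compiled into the Report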

The following table shows the 4 essential components (i.e. Selectors, Mutators, Filters, Evaluators) used by each recipe implemented in our project:

| Attack Recipe | Selector | Mutation | Constraint | Evaluator |
| --- | --- | --- | --- | --- |
| ReNeLLM | N/A | ChangeStyle, InsertMeaninglessCharacters, MisspellSensitiveWords, Rephrase, GenerateSimilar, AlterSentenceStructure | DeleteHarmLess | Evaluator_GenerativeJudge |
| GPTFuzz | MCTSExploreSelectPolicy, RandomSelector, EXP3SelectPolicy, RoundRobinSelectPolicy, UCBSelectPolicy | ChangeStyle, Expand, Rephrase, Crossover, Translation, Shorten | N/A | Evaluator_ClassificationJudge |
| ICA | N/A | N/A | N/A | Evaluator_PatternJudge |
| AutoDAN | N/A | Rephrase, CrossOver, ReplaceWordsWithSynonyms | N/A | Evaluator_PatternJudge |
| PAIR | N/A | HistoricalInsight | N/A | N/A |
| JailBroken | N/A | Artificial, Auto_obfuscation, Auto_payload_splitting, Base64_input_only, Base64_raw, Base64, Combination_1, Combination_2, Combination_3, Disemovowel, Leetspeak, Rot13 | N/A | Evaluator_GenerativeJudge |
| Cipher | N/A | AsciiExpert, CaserExpert, MorseExpert, SelfDefineCipher | N/A | Evaluator_GenerativeJudge |
| DeepInception | N/A | Inception | N/A | Evaluator_GenerativeJudge |
| MultiLingual | N/A | Translate | N/A | Evaluator_GenerativeJudge |
| GCG | ReferenceLossSelector | MutationTokenGradient | N/A | Evaluator_PrefixExactMatch |
| TAP | SelectBasedOnScores | IntrospectGeneration | DeleteOffTopic | Evaluator_GenerativeGetScore |

Usage

Using a Recipe

Many implemented methods are ready for use! Instead of devising new jailbreak schemes from scratch, the EasyJailbreak team gathers them from relevant papers; these are referred to as "recipes". Users can freely apply these jailbreak schemes to various models to get familiar with the performance of both the models and the schemes. The only thing users need to do is download the models and use the provided API.

Here is a usage example:

from easyjailbreak.attacker.PAIR_chao_2023 import PAIR
from easyjailbreak.datasets import JailbreakDataset
from easyjailbreak.models.huggingface_model import HuggingfaceModel
from easyjailbreak.models.openai_model import OpenaiModel

# First, prepare models and datasets.
attack_model = HuggingfaceModel(model_name_or_path='lmsys/vicuna-13b-v1.5',
                                template_name='vicuna_v1.1')
target_model = HuggingfaceModel(model_name_or_path='meta-llama/Llama-2-7b-chat-hf',
                                template_name='llama-2')
eval_model = OpenaiModel(model_name='gpt-4',
                         api_keys='input your valid key here!!!')
dataset = JailbreakDataset('AdvBench')

# Then instantiate the recipe.
attacker = PAIR(attack_model=attack_model,
                target_model=target_model,
                eval_model=eval_model,
                jailbreakDatasets=dataset,
                n_streams=20,
                n_iterations=5)

# Finally, start jailbreaking.
attacker.attack(save_path='vicuna-13b-v1.5_llama-2-7b-chat_gpt4_AdvBench_result.jsonl')

All available recipes and their relevant information can be found in the documentation.

DIY Your Attacker

1. Load Models

You can load a model with a single line of Python code.

# import model prototype
from easyjailbreak.models.huggingface_model import HuggingfaceModel

# load the target model (you may use up to 3 models in an attacker, i.e. attack_model, eval_model, target_model)
target_model = HuggingfaceModel(model_name_or_path='meta-llama/Llama-2-7b-chat-hf',
                                model_name='llama-2')

# use the target_model to generate a response to any input. Here is an example.
target_response = target_model.generate(messages=['how to make a bomb?'])
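
API-backed models are loaded in the same way and can serve as the eval_model or attack_model of an attacker. Here is a minimal sketch, mirroring the OpenaiModel usage from the recipe example above:

from easyjailbreak.models.openai_model import OpenaiModel

# load an API-backed model, e.g. to act as the eval_model of an attacker
eval_model = OpenaiModel(model_name='gpt-4',
                         api_keys='input your valid key here!!!')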

2. Load Dataset and Initialize Seed

Dataset: We provide a class named "JailbreakDataset" to wrap the instance list, where every instance contains a query, jailbreak prompts, etc. You can load a dataset either from our online repo or from a local file.

Seed: You can simply generate initial seeds at random.

from easyjailbreak.datasets import JailbreakDataset
from easyjailbreak.seed.seed_random import SeedRandom

# Option 1: load dataset from our online repo. Available datasets and their details can be found at https://huggingface.co/datasets/Lemhf14/EasyJailbreak_Datasets
dataset = JailbreakDataset(dataset='AdvBench')

# Option 2: load dataset from a local file
dataset = JailbreakDataset(local_file_type='csv', dataset='AdvBench.csv')

# Randomly generate initial seeds
seeder = SeedRandom()
seeds = seeder.new_seeds()

3. Instantiate Components

As mentioned in Project Structure, the second part consists of 4 major components (modules), i.e. Selector, Mutator, Filter, and Evaluator, which you need to instantiate when you DIY your attack method. All available Selectors, Mutators, Filters, and Evaluators and their details can be found in the documentation.

You can import the module you want using from easyjailbreak.module_name.method_name import method_name. Here is a brief guide to get you started (method_name is the method you choose in the corresponding module):

  1. Selector: from easyjailbreak.selector.method_name import method_name
  2. Mutator: from easyjailbreak.mutation.rule.method_name import method_name
  3. Filter: from easyjailbreak.constraint.method_name import method_name
  4. Evaluator: from easyjailbreak.metrics.Evaluator.method_name import method_name

Here is an example of instantiating a Selector named "RandomSelector".

from easyjailbreak.selector.RandomSelector import RandomSelector
from easyjailbreak.datasets.jailbreak_datasets import JailbreakDataset

dataset = JailbreakDataset(dataset='AdvBench')

# Instantiate a Selector
selector = RandomSelector(dataset)

# Apply selection on the dataset
dataset = selector.select()
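
As the final assignment suggests, select() returns a JailbreakDataset containing the chosen instances, so its output can be handed straight to the next component (e.g. a Mutator) in your own attack loop.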

Citing EasyJailbreak

@inproceedings{TODO,
  title={TODO},
  author={TODO},
  booktitle={TODO},
  pages={TODO},
  year={2023}
}


License: GNU General Public License v3.0

