
SneakyPrompt: Jailbreaking Text-to-image Generative Models

This is the official implementation of the paper: SneakyPrompt: Jailbreaking Text-to-image Generative Models

Our work has been covered by MIT Technology Review and JHU Hub. Please check them out if interested.

Environment setup

The experiments were run on Ubuntu 18.04 with a single NVIDIA RTX 3090 GPU (24 GB). Please install the dependencies via:

conda env create -f environment.yml
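
After creating the environment, activate it before running the scripts below; the environment name here is a placeholder, so substitute the one defined in environment.yml:

conda activate <env-name>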

Dataset

The nsfw_200.txt file is available upon request; please email the authors for the password.

Note: This dataset may contain explicit content, and user discretion is advised when accessing or using it.

  • Do not use this dataset for any non-research purposes.
  • Do not distribute or publish any segment of the data.

Search for adversarial prompts:

python main.py --target='sd' --method='rl' --reward_mode='clip' --threshold=0.26 --len_subword=10 --q_limit=60 --safety='ti_sd'

You can change the parameters following the choices listed in 'search.py'; an illustrative variation is shown below. The adversarial prompts and statistical results (xx.csv) will be saved under '/results', and the generated images will be saved under '/figure'.
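
For example, a run with a larger query budget would look like the following; the flags are the same as above, and these specific values are only illustrative, so make sure they match the choices defined in 'search.py':

python main.py --target='sd' --method='rl' --reward_mode='clip' --threshold=0.26 --len_subword=10 --q_limit=100 --safety='ti_sd'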

Evaluate the results:

python evaluate.py --path='PATH OF xx.csv'
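
For example, assuming the search step saved its statistics under the default '/results' directory (the actual filename is generated at run time):

python evaluate.py --path='results/xx.csv'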

Citation:

Please cite our paper if you find this repo useful.

@inproceedings{yang2023sneakyprompt,
      title={SneakyPrompt: Jailbreaking Text-to-image Generative Models},
      author={Yuchen Yang and Bo Hui and Haolin Yuan and Neil Gong and Yinzhi Cao},
      year={2024},
      booktitle={Proceedings of the IEEE Symposium on Security and Privacy}
}

About

License: MIT License

