Robust CLIP Image Generation

Open In Colab

Motivation

In this minimalist notebook, we generate images from a given text prompt by following the gradients of Robust CLIP (Schlarmann et al., ICML 2024), without a generative model. Generating images from the gradients of a classifier is by no means a novel idea (see, for example, this paper). Several approaches have used CLIP to synthesize or search for an image similar to a prompt (e.g., CLIPig). However, vanilla CLIP's gradients are not well suited for generation: its image encoder is sensitive to small imperceptible perturbations and adversarial attacks, so naive gradient descent may produce semantically poor images that nonetheless achieve high CLIP similarity scores. To alleviate this issue, CLIPig augments samples with noise, random rotations, etc. In contrast, robust classifiers have perceptually aligned gradients and perform better at generative tasks, as shown by Srinivas et al. This motivated me to check how well Robust CLIP works in a straightforward setting (i.e., without tricky augmentations).
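
To make the procedure concrete, here is a minimal sketch (not the notebook's exact code): the image is optimized directly in pixel space by gradient ascent on the cosine similarity between its Robust CLIP embedding and the prompt embedding. The checkpoint name, prompt, resolution, and hyperparameters below are assumptions; substitute whichever Robust CLIP weights you actually use.

```python
import torch
import open_clip

# Assumption: a Robust CLIP checkpoint (e.g., a FARE model from
# Schlarmann et al.) loadable through open_clip from the Hugging Face hub.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, _, _ = open_clip.create_model_and_transforms("hf-hub:chs20/fare2-clip")
tokenizer = open_clip.get_tokenizer("ViT-B-32")  # standard CLIP BPE tokenizer
model = model.to(device).eval()

# Encode the prompt once; only the image is optimized.
text = tokenizer(["a photo of a red rose"]).to(device)
with torch.no_grad():
    text_emb = model.encode_text(text)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

# CLIP's standard input normalization, applied manually so the
# optimized variable itself stays in [0, 1] pixel space.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

# Start from noise and ascend the cosine similarity to the text embedding.
image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(300):
    image_emb = model.encode_image((image - mean) / std)
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    loss = -(image_emb * text_emb).sum()  # negative cosine similarity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0.0, 1.0)  # keep pixels in the valid range
```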

Results

The results are random and not cherry-picked.

Contribution

I welcome everyone who wishes to contribute. The main objects of interest are experiments with different guidance schedulers and normalizations; one possible scheduler is sketched below. If you have something to show or share, please post it in Discussions.
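
As an illustration of what a guidance scheduler could look like (a hypothetical example, not something already in the notebook), one might decay the step size of the pixel-space update with a cosine schedule:

```python
import math

def cosine_guidance(step: int, total_steps: int,
                    lr_max: float = 0.1, lr_min: float = 0.01) -> float:
    """Hypothetical guidance scheduler: cosine-decay the update step size."""
    t = step / max(total_steps - 1, 1)
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t))

# Applied inside the optimization loop, e.g.:
#   for group in optimizer.param_groups:
#       group["lr"] = cosine_guidance(step, total_steps)
```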

Acknowledgements
