pbaylies / Augmented_CLIP

Training simple models to predict CLIP image embeddings from text embeddings, and vice versa.

Train aug_clip against the laion400m-embeddings found here: https://laion.ai/laion-400-open-dataset/. Note that these embeddings were produced with the base ViT-B/32 CLIP model.

[sample image]
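As a rough sketch of the idea above (the repo's actual architecture and training code may differ), a small MLP can be trained to map 512-dimensional ViT-B/32 text embeddings to the corresponding image embeddings; the layer sizes and cosine loss here are assumptions:

```python
import torch
import torch.nn as nn

# Minimal sketch, assuming 512-d ViT-B/32 embeddings and a cosine loss;
# aug_clip's real architecture and training setup may differ.
class Text2Image(nn.Module):
    def __init__(self, dim: int = 512, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        return self.net(text_emb)

model = Text2Image()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(text_emb: torch.Tensor, image_emb: torch.Tensor) -> float:
    # Maximize cosine similarity between predicted and true image embeddings.
    pred = model(text_emb)
    loss = 1 - nn.functional.cosine_similarity(pred, image_emb).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The same shape works in the other direction (image embeddings to text embeddings) by swapping the inputs and targets.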

  • Update: added a model with weights averaged from the other six, along with a simple averaging script (see the sketch below).

[sample image]
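A weight-averaging script along those lines can be quite short; this is a minimal sketch assuming six PyTorch checkpoints with identical state-dict keys (the file names are placeholders, not the repo's):

```python
import torch

# Minimal sketch of checkpoint weight averaging; paths are placeholders,
# not the repo's actual file names.
paths = [f"model_{i}.pt" for i in range(6)]
state_dicts = [torch.load(p, map_location="cpu") for p in paths]

avg = {}
for key in state_dicts[0]:
    # Stack the same tensor from every checkpoint and take the mean.
    avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)

torch.save(avg, "model_averaged.pt")
```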

The sample notebook is adapted from Sadnow's 360Diffusion repo; thanks to all involved!

Latest revision: Beta 1.52 (10/11/21): https://colab.research.google.com/github/sadnow/360Diffusion/blob/main/360Diffusion_Public.ipynb

Latest highlights: full compatibility with both the 256 and 512 models for upscaling to 256, 512, 1024, 2048, and 4096 px.

Note that 4096 px files aren't quite as pretty as 2048 px ones, and they're massive in file size; 2048 is the best choice in most cases. If you intend to upscale to anything higher than 1024, I recommend using the 512 diffusion model found in the settings.
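Encoded as a tiny hypothetical helper, that rule of thumb looks like this; the model names are illustrative, not the notebook's actual setting values:

```python
def pick_diffusion_model(target_px: int) -> str:
    """Choose a checkpoint for a target output size (illustrative names)."""
    # Above 1024 px, the 512 model is recommended; otherwise 256 suffices.
    return "512x512_diffusion" if target_px > 1024 else "256x256_diffusion"

print(pick_diffusion_model(2048))  # -> 512x512_diffusion
```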

Credits & Acknowledgements

Prior release(s): implemented Daniel Russ's Perlin revisions, fixed the init bug, added a 4096 double-pass, made VRAM fixes, and added a practical debug_mode (set to a higher skip_timestep).

All edits & additions are welcome and appreciated~

Languages

Jupyter Notebook 86.9%, Python 13.1%