Image-to-image style transfer
This is a repo for hacking around with image-to-image style transfer methods.
I am interested in the approach detailed in the paper “Inversion-Based Style Transfer with Diffusion Models” (CVPR 2023).
My main goal is to take the code apart to understand the approach better; I find that refactoring code and trying to make it more robust is a good way to do that.
Initial objectives
- fix known bugs
- update to current dependencies
- refactor to make the code clearer, more robust, and more extensible; split out the copy/pasted code (e.g. the Stable Diffusion `ldm` module) and attribute it accordingly
Longer term, I want to take apart and/or implement Neural Style Transfer (CNN-based) and GAN-based approaches, and compare all three on similar tasks.
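For the CNN-based comparison, the core of classic Neural Style Transfer (Gatys et al.) is matching Gram matrices of feature maps between the generated and style images. A minimal NumPy sketch of that style term (function names are my own, not from this repo or the paper's code):

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Gram matrix of a (channels, height, width) feature map,
    normalised by the number of elements."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)

def style_loss(gen_feats: np.ndarray, style_feats: np.ndarray) -> float:
    """Per-layer style term: mean squared difference between the
    Gram matrices of the generated and style feature maps."""
    return float(np.mean((gram_matrix(gen_feats) - gram_matrix(style_feats)) ** 2))

rng = np.random.default_rng(0)
feats = rng.standard_normal((64, 32, 32))
print(style_loss(feats, feats))  # identical features -> 0.0
```

In a real NST implementation these feature maps would come from several layers of a pretrained CNN (typically VGG), and the style loss would be summed over layers alongside a content loss.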