xLuge's starred repositories
Awesome-Diffusion-Models
A collection of resources and papers on Diffusion Models
magic-animate
[CVPR 2024] MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
multidiffusion-upscaler-for-automatic1111
Tiled Diffusion and VAE optimizations, licensed under CC BY-NC-SA 4.0
Moore-AnimateAnyone
Character Animation (AnimateAnyone, Face Reenactment)
Open-AnimateAnyone
Unofficial Implementation of Animate Anyone
Mask2Former
Code release for "Masked-attention Mask Transformer for Universal Image Segmentation"
PatchFusion
[CVPR 2024] An End-to-End Tile-Based Framework for High-Resolution Monocular Metric Depth Estimation
MotionDirector
[ECCV 2024 Oral] MotionDirector: Motion Customization of Text-to-Video Diffusion Models.
Multi-LoRA-Composition
Repository for the Paper "Multi-LoRA Composition for Image Generation"
pytorch_mgie
A Gradio demo of MGIE
BakedAvatar
Pytorch Code for "BakedAvatar: Baking Neural Fields for Real-Time Head Avatar Synthesis"
awesome-image-inpainting-studies
A collection of awesome image inpainting studies.
Conffusion
Official Implementation for the "Conffusion: Confidence Intervals for Diffusion Models" paper.
EXE-GAN
Facial image inpainting fills in visually realistic and semantically meaningful content for missing or masked pixels in a face image. This paper presents EXE-GAN, a novel diverse and interactive facial inpainting framework that preserves the high-quality visual effect of the whole image while completing the face with exemplar-like facial attributes.