PuLID ComfyUI

PuLID ComfyUI native implementation.

*(image: basic workflow example)*

Notes

The code should be considered beta; things may change in the coming days. In the examples directory you'll find some basic workflows.

The original implementation makes use of a 4-step Lightning UNet. I made a few comparisons with the official Gradio demo using the same model in ComfyUI and I can't see any noticeable difference, meaning this code should be faithful to the original. The Lightning LoRA doesn't work as well as the full Lightning UNet.

Testing other models, though, I noticed some quality degradation. You may need to experiment with the CFG scale and various samplers/schedulers (try sgm_uniform).
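
As a starting point for experimentation (the numbers below are assumptions to tune from, not tested recommendations), settings along these lines can be fed to a standard KSampler node:

```python
# Hypothetical starting point for non-Lightning checkpoints.
ksampler_settings = {
    "steps": 30,
    "cfg": 5.0,                  # lower the CFG scale if quality degrades
    "sampler_name": "dpmpp_2m",  # any ComfyUI sampler; experiment freely
    "scheduler": "sgm_uniform",  # the scheduler suggested above
    "denoise": 1.0,
}
```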

The quality of the reference image is very important, possibly because EVA CLIP picks up more detail than standard CLIP. Be sure to use a clean and sharp picture!
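
If the reference is large or off-center, a quick preprocessing pass can help. Below is a minimal sketch using Pillow; the 336px target matches the EVA02-CLIP-L-14-336 input size mentioned under Installation, while the center-crop heuristic is just an assumption (cropping to the face manually works better):

```python
from PIL import Image

def prep_reference(path: str, size: int = 336) -> Image.Image:
    """Center-crop to a square and downscale with a high-quality filter."""
    img = Image.open(path).convert("RGB")
    side = min(img.size)
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))
    return img.resize((size, size), Image.LANCZOS)
```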

For IPAdapter compatibility you need to update the IPAdapter extension!

The 'method' parameter

method applies the weights in different ways: fidelity stays closer to the reference ID, while style leaves more freedom to the checkpoint. Sometimes the difference is minimal. I've also added neutral, which doesn't do any normalization, so the reference is very strong and you need to lower the weight.
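
Conceptually, the modes differ in how the ID embeddings are normalized before being applied. The snippet below is an illustrative sketch only, not the extension's actual code; the exact formulas are assumptions:

```python
import torch
import torch.nn.functional as F

def apply_method(id_embeds: torch.Tensor, weight: float, method: str) -> torch.Tensor:
    # Illustrative sketch -- the real node's math may differ.
    if method == "neutral":
        # no normalization: the raw reference dominates, so lower the weight
        return id_embeds * weight
    normalized = F.normalize(id_embeds, dim=-1) * weight
    if method == "fidelity":
        # full strength: stays closer to the reference ID
        return normalized
    if method == "style":
        # attenuated: leaves the checkpoint more freedom
        return normalized * 0.8
    raise ValueError(f"unknown method: {method}")
```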

Installation

  • The PuLID pre-trained model goes in ComfyUI/models/pulid/ (thanks to Chenlei Hu for converting it into IPAdapter format).
  • The EVA CLIP model is EVA02-CLIP-L-14-336; it should be downloaded automatically (into the huggingface cache directory).
  • The facexlib dependency needs to be installed; its models are downloaded on first use.
  • Finally you need InsightFace with AntelopeV2; the unzipped models should be placed in ComfyUI/models/insightface/models/antelopev2. A quick path check is sketched below.
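
To verify everything landed in the right place, here is a small sanity check assuming a default ComfyUI layout (the root path is a placeholder; adjust it to your installation):

```python
from pathlib import Path

COMFYUI_ROOT = Path("ComfyUI")  # placeholder: point at your install

# Paths taken from the installation steps above.
expected = [
    COMFYUI_ROOT / "models" / "pulid",
    COMFYUI_ROOT / "models" / "insightface" / "models" / "antelopev2",
]

for path in expected:
    ok = path.is_dir() and any(path.iterdir())
    print(f"{path}: {'ok' if ok else 'MISSING or empty'}")
```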

About

License: Apache License 2.0

