layerdiffusion / sd-forge-layerdiffuse

[WIP] Layer Diffusion for WebUI (via Forge)


High-level instructions on how to use the LatentTransparencyOffsetEncoder model

kevsak5 opened this issue · comments

Hi,
Thanks again for this awesome tech.
I see in several issues that Encoder support will come in the near future. Thank you for that.
In the meantime, if I want to use LatentTransparencyOffsetEncoder and test a few things out, what's the expected input?
From reading the decoder:

  1. It seems like the input should be alpha then RGB. Is this correct?
  2. Are the input values in [-1, 1] or [0, 1]?
  3. From my simple testing of autoencoding, e.g. LatentTransparencyOffsetEncoder(alpha, RGB) + sdvae.encode(masked_rgb) -> decode, it seems like not adding the offset performs better (rough sketch of what I'm doing below). Is this expected?

Thanks again.
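For reference, this is roughly what I'm wiring up. The import path, the (alpha, RGB) channel order, the value ranges, and adding the offset to the raw VAE latent are all assumptions on my part, not anything confirmed by the repo:

```python
import torch
from diffusers import AutoencoderKL

# Import path is my guess -- adjust to wherever the class lives in your checkout.
from lib_layerdiffusion.models import LatentTransparencyOffsetEncoder

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()
offset_encoder = LatentTransparencyOffsetEncoder().eval()
# offset_encoder.load_state_dict(...)  # load the released transparent-encoder weights here

# Dummy RGBA in [0, 1]; in practice this comes from a transparent PNG.
rgba = torch.rand(1, 4, 512, 512)
rgb, alpha = rgba[:, :3], rgba[:, 3:]
masked_rgb = rgb * alpha                              # premultiplied / masked RGB

# Assumption 1: channel order is (alpha, R, G, B).
# Assumption 2: alpha stays in [0, 1], RGB is rescaled to [-1, 1].
encoder_in = torch.cat([alpha, masked_rgb * 2.0 - 1.0], dim=1)

with torch.no_grad():
    latent = vae.encode(masked_rgb * 2.0 - 1.0).latent_dist.sample()
    # Assumption 3: the offset is added to the raw (unscaled) VAE latent.
    latent = latent + offset_encoder(encoder_in)
    recon = vae.decode(latent).sample                 # back to pixels for a quick visual check
```

If any of these assumptions are wrong, that could explain why adding the offset looks worse in my test.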

Me neither; I am also wondering how this latent offset encoder is supposed to work.

> 3. LatentTransparencyOffsetEncoder(alpha, RGB) + sdvae.encode(masked_rgb) -> decode, it seems like not adding the offset performs better. Is this expected?

Hello, have you figured it out yet?

Hey people, see also the update here:

#90 (comment)