invoke-ai / InvokeAI

InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.

Home Page: https://invoke-ai.github.io/InvokeAI/

[bug]: Seamless Tile not seamless on certain colors

KingsmanZer0 opened this issue

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

RTX 4090

GPU VRAM

24 GB

Version number

4.2

Browser

Chrome 124.0.6367

Python dependencies

No response

What happened

I'm trying to generate seamless tiles, but when generating images with light red/pink/blue colors there is a visible light brown line running down the seams after I tile the image in Photoshop. I tried generating a couple more with light reds/blues and the issue still persists.
I also tried a couple of other colors like white and dark colors, but those seams were fine and seamless.
(Image attachment: seams1)

What you expected to happen

Transitions in color across the tile seams should be seamless.

How to reproduce the problem

Prompt used: red flower watercolor
Model: juggernautxl9
size: 1024 x 1024
Enabled: X and Y seamless tile

Additional context

When trying to generate a seamless tile on images with light red/pink/blue colors, a light brown seam is visible. I tried some other colors like white and dark colors and they seemed fine.
(Image attachment: seams2)

Discord username

tuanny87

I can't seem to reproduce this, or maybe I just don't see the issue. I've made dozens of images with x & y seamless, the same model and same or similar prompts. Can't see the line on any of them.

Here's a selection where the petals cross the central vertical axis of the image, where the line would be most visible if it were there:

Do you see any lines on these?

You have to join the tile in Photoshop to see the seam issue. Here I took the first image you generated, made a duplicate, and joined the pattern; you can see the brown line on the flower. I tried this in Auto1111 and it does not have this issue.
(Image attachment: seam line)

Ahh, sorry, I misunderstood the issue.

@RyanJDick @blessedcoolant I wonder if our seamless isn't quite right yet. I noticed that A1111 appears to do fully-circular tiling only. It's done by patching torch.nn.Conv2d's constructor. Very simple, but doesn't support tiling on individual axes.
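As an illustration only (a minimal sketch of the constructor-patching idea described above, not A1111's or InvokeAI's actual code), patching torch.nn.Conv2d globally might look roughly like this:

```python
# Illustrative sketch: globally patch torch.nn.Conv2d so every conv layer
# constructed afterwards uses circular padding. This gives fully-circular
# (x and y) tiling and cannot enable tiling on just one axis.
# Assumes padding_mode is not passed positionally by the model code.
import torch

_original_conv2d_init = torch.nn.Conv2d.__init__

def _circular_conv2d_init(self, *args, **kwargs):
    kwargs["padding_mode"] = "circular"
    _original_conv2d_init(self, *args, **kwargs)

torch.nn.Conv2d.__init__ = _circular_conv2d_init
```

Because the patch applies at construction time, it only affects models loaded afterwards and can't be toggled per axis or per generation without rebuilding the model.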

There's a long-running diffusers issue about seamless, and here's the implementation that apparently was settled-on as working: huggingface/diffusers#556 (comment)

This is similar to ours, with some notable differences (see the sketch after this list):

  • We skip some layers
  • We also patch nn.ConvTranspose2d
  • The diffusers implementation has special handling for LoRACompatibleConv layers
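To make the per-axis behaviour concrete, here is a minimal sketch of the asymmetric-padding idea (hypothetical helper name, not the exact diffusers or InvokeAI code): pad the input circularly only on the seamless axes, then run the convolution with its built-in padding disabled.

```python
import torch
import torch.nn.functional as F

def patch_conv_for_seamless(conv: torch.nn.Conv2d, seamless_x: bool, seamless_y: bool):
    """Hypothetical helper: make a conv wrap around on the selected axes only."""
    # Assumes numeric padding (not "same"/"valid" strings).
    pad_h, pad_w = conv.padding if isinstance(conv.padding, tuple) else (conv.padding,) * 2

    def _conv_forward(x, weight, bias):
        # Pad width (x axis) and height (y axis) separately, each with its own mode.
        x = F.pad(x, (pad_w, pad_w, 0, 0), mode="circular" if seamless_x else "constant")
        x = F.pad(x, (0, 0, pad_h, pad_h), mode="circular" if seamless_y else "constant")
        # Built-in padding is disabled because the input is already padded.
        return F.conv2d(x, weight, bias, conv.stride, (0, 0), conv.dilation, conv.groups)

    conv._conv_forward = _conv_forward
```

Handling nn.ConvTranspose2d would need a separate wrapper since its forward path differs; whether that is needed at all is one of the open questions in this thread.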

I don't understand this well enough to make any changes that aren't just guesses/experimentation.

@psychedelicious #6344 Implemented here. Seems to be working. Give it a run.

@blessedcoolant I'm asking why we skip a layer and handle the transpose layer, because the comments in the code seem to indicate that is important. At some point in the past, our implementation was like the diffusers one and we changed it to what we have now.

@psychedelicious Because technically we also need to change the layers in the text_encoder for seamless, but we only do the UNet and VAE. So I'm guessing skipping the last layer was a sort of "fix" to mitigate some of the prompt importance in favor of seamlessness. I can't really tell why it was done that way.

So far in my testing, I haven't found any cases where this new/old algo has failed. It's a lot simpler and clearer about what it does.

And I don't know why there was even a check for the transposed layers, because there are none in the UNet or the VAE.

Ok. What if we patch torch directly like A1111 does? That would automatically apply to every model.

@psychedelicious He doesn't actually use that function. He just applies circular padding to the existing layers as the generation settings require, which is what needs to be done and what we are doing right now. https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0/modules/sd_hijack.py#L311
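For reference, the idea described here (flipping the padding mode on the already-constructed conv layers when tiling is requested) might be sketched like this; a rough sketch, not the actual A1111 code, and the module passed in would be e.g. the UNet or VAE:

```python
import torch

def apply_circular(module: torch.nn.Module, enable: bool):
    # Walk the already-constructed model and toggle each Conv2d between
    # circular and zero padding. No constructor patching involved.
    for layer in module.modules():
        if isinstance(layer, torch.nn.Conv2d):
            layer.padding_mode = "circular" if enable else "zeros"
```

Like the A1111 approach, this is all-or-nothing on both axes; per-axis control still needs asymmetric padding along the lines sketched earlier.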

@blessedcoolant Ahhh I see. Thanks for clearing that up.