lllyasviel / stable-diffusion-webui-forge


[Bug]: Images are rendered, but do not appear in txt2img

askAvoid opened this issue

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

When I create an image with txt2img, the operation succeeds: I can see the file on disk and view it with the Image Browser extension. However, it does not appear in the txt2img pane. I just get:

Error
Connection timed out.

in the top-right error message box.

Steps to reproduce the problem

This seems to be related to how much VRAM (or output size) the operation involves.

If I generate an image without High-res fix, it renders fine. Turn on High-res fix and it no longer appears (but still renders).

The same thing happens with images generated at 2048x2048 resolution, but they work fine at 1024x1024.

Again, the images generate and are saved to disk; they just do not render in the txt2img tab.
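One possible explanation for the resolution dependence (my speculation, not confirmed in this issue): the frontend receives the finished image as a base64 payload, so the amount of data scales with pixel count, and a fixed client-side timeout can trip on larger images. A quick back-of-the-envelope sketch, assuming an uncompressed RGB estimate:

```python
# Rough estimate of the per-image payload the browser must receive.
# Assumption (mine): the UI transfers the image base64-encoded, so the
# transfer size grows with pixel count; PNG compression will shrink the
# real number, but the 4x ratio between the two resolutions holds.

def payload_estimate_mb(width: int, height: int, channels: int = 3) -> float:
    raw_bytes = width * height * channels  # uncompressed RGB pixels
    base64_bytes = raw_bytes * 4 / 3       # base64 inflates size by ~33%
    return base64_bytes / (1024 * 1024)

for side in (1024, 1024 * 2):
    print(f"{side}x{side}: ~{payload_estimate_mb(side, side):.1f} MB before compression")
# 1024x1024 works out to ~4 MB; 2048x2048 to ~16 MB, i.e. 4x the data.
```

This would be consistent with 1024x1024 succeeding while 2048x2048 and High-res fix outputs time out.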

What should have happened?

The image(s) should have rendered in the txt2img pane.

Also, the "Connection timed out" error message is unhelpful and should be improved.

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

    "Platform": "Linux-6.8.0-35-generic-x86_64-with-glibc2.39",
    "Python": "3.10.14",
    "Version": "f0.0.17v1.8.0rc-latest-276-g29be1da7",
    "Commit": "29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7",
    "Script path": "/home/askavoid/build/stable-diffusion-webui-forge",
    "Data path": "/home/askavoid/build/stable-diffusion-webui-forge",
    "Extensions dir": "/home/askavoid/build/stable-diffusion-webui-forge/extensions",
    "Checksum": "bf5d9377f011218503a350d38ac5bfa594985da5296598f8b37477c97efd877d",
    "Commandline": [
        "launch.py",
        "--api",
        "--listen",
        "--gradio-auth",
        "<hidden>"
    ],
    "Torch env info": {
        "torch_version": "2.1.2+cu121",
        "is_debug_build": "False",
        "cuda_compiled_version": "12.1",
        "gcc_version": "(Ubuntu 13.2.0-23ubuntu4) 13.2.0",
        "clang_version": null,
        "cmake_version": "version 3.28.3",
        "os": "Ubuntu 24.04 LTS (x86_64)",
        "libc_version": "glibc-2.39",
        "python_version": "3.10.14 (main, Mar 21 2024, 16:24:04) [GCC 11.2.0] (64-bit runtime)",
        "python_platform": "Linux-6.8.0-35-generic-x86_64-with-glibc2.39",
        "is_cuda_available": "True",
        "cuda_runtime_version": "12.0.140",
        "cuda_module_loading": "LAZY",
        "nvidia_driver_version": "535.171.04",
        "nvidia_gpu_models": "GPU 0: NVIDIA GeForce RTX 4090",
        "cudnn_version": null,
        "pip_version": "pip3",
        "pip_packages": [
            "numpy==1.26.2",
            "open-clip-torch==2.20.0",
            "pytorch-lightning==1.9.4",
            "torch==2.1.2+cu121",
            "torchdiffeq==0.2.3",
            "torchmetrics==1.4.0.post0",
            "torchsde==0.2.6",
            "torchvision==0.16.2+cu121",
            "triton==2.1.0"
        ],

The rest looks irrelevant as this is a fresh installation with no options changed.
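Since the server was launched with --api, generation can be checked directly against the HTTP API, bypassing the browser UI entirely. A minimal sketch, assuming the standard /sdapi/v1/txt2img endpoint inherited from upstream webui (host, port, auth values, and parameters below are placeholders, not from this report):

```python
# Sketch: confirm txt2img succeeds server-side even when the browser pane
# times out. The helper functions build the request body and decode the
# response; the network call itself is isolated in fetch_and_save().
import base64

def build_txt2img_payload(prompt: str, width: int, height: int) -> dict:
    """Request body for POST /sdapi/v1/txt2img (a subset of supported fields)."""
    return {"prompt": prompt, "width": width, "height": height, "steps": 20}

def decode_first_image(response_json: dict) -> bytes:
    """The API returns base64-encoded images in the 'images' list."""
    return base64.b64decode(response_json["images"][0])

def fetch_and_save(host: str = "http://127.0.0.1:7860",
                   auth: tuple = ("user", "password")) -> None:
    import requests  # third-party; pip install requests
    r = requests.post(f"{host}/sdapi/v1/txt2img",
                      json=build_txt2img_payload("a test image", 2048, 2048),
                      auth=auth, timeout=600)
    with open("api_test.png", "wb") as f:
        f.write(decode_first_image(r.json()))
```

If fetch_and_save() writes a valid PNG while the browser still shows "Connection timed out", that would point at the frontend transfer rather than generation itself.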

Console logs

[Memory Management] Current Free GPU Memory (MB) =  20772.8984375
[Memory Management] Model Memory (MB) =  4210.9375
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  15537.9609375
Moving model(s) has taken 0.67 seconds
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 45/45 [00:03<00:00, 11.57it/s]
Cleanup minimal inference memory.████████████████████████████████████████████████▍                                     | 45/73 [00:04<00:02, 11.57it/s]
tiled upscale: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [00:01<00:00, 20.81it/s]
token_merging_ratio = 0.3
To load target model SDXL
Begin to load 1 model
Reuse 1 loaded models
[Memory Management] Current Free GPU Memory (MB) =  20751.81982421875
[Memory Management] Model Memory (MB) =  4210.9375
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  15516.88232421875
Moving model(s) has taken 0.46 seconds
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 28/28 [00:06<00:00,  4.50it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 73/73 [00:14<00:00,  5.06it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 73/73 [00:14<00:00,  4.50it/s]


Additional information

_No response_

This turned out to be a client-side issue; resolving.