Sygil-Dev / stable-diffusion

[Bug] ZeroDivisionError: division by zero. No i2i result since yesterday's update

LiJT opened this issue · comments

commented

Traceback (most recent call last):
  File "C:\Users\charl\miniconda3\envs\ldm\lib\site-packages\gradio\routes.py", line 247, in run_predict
    output = await app.blocks.process_api(
  File "C:\Users\charl\miniconda3\envs\ldm\lib\site-packages\gradio\blocks.py", line 641, in process_api
    predictions, duration = await self.call_function(fn_index, processed_input)
  File "C:\Users\charl\miniconda3\envs\ldm\lib\site-packages\gradio\blocks.py", line 556, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\charl\miniconda3\envs\ldm\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\charl\miniconda3\envs\ldm\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\charl\miniconda3\envs\ldm\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "scripts/webui.py", line 1288, in img2img
    output_images, seed, info, stats = process_images(
  File "scripts/webui.py", line 923, in process_images
    grid = image_grid(output_images, batch_size)
  File "scripts/webui.py", line 414, in image_grid
    cols = math.ceil(len(imgs) / rows)
ZeroDivisionError: division by zero

Screenshot 2022-09-01 141425

I encountered this bug, which never happened before. It occurs when I:

  1. go to image2image, draw a mask, tweak some settings, and click "only generate the masked area"
  2. then the error above occurs and no result comes back

It says ZeroDivisionError: division by zero and I don't know what to do. I just ran the sample image, nothing special.
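For reference, the crash comes from `cols = math.ceil(len(imgs) / rows)` inside `image_grid` in scripts/webui.py, so `rows` is evidently ending up as 0. Below is a minimal sketch of such a grid helper (the signature and layout logic are assumptions for illustration, not the repo's exact code) showing where a zero row count triggers the ZeroDivisionError:

```python
import math
from PIL import Image

def image_grid(imgs, batch_size, force_n_rows=None):
    # Sketch of a grid builder like the one in scripts/webui.py.
    # If the row count resolves to 0 (e.g. a batch/row setting of 0
    # is passed through unchecked), the division below raises the
    # ZeroDivisionError seen in the traceback.
    rows = force_n_rows if force_n_rows is not None else batch_size
    cols = math.ceil(len(imgs) / rows)  # crashes when rows == 0

    w, h = imgs[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h), color="black")
    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid
```

Since the traceback shows the call as `image_grid(output_images, batch_size)`, a batch-size/row value of 0 reaching this helper is the likely trigger.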

commented

After testing, I found where the issue is!

If I uncheck "save individual images", this same error is guaranteed.
But sometimes I just want to save the grid only, to save a bit of space.

Is it fixable?
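It should be fixable. One defensive option (a sketch only, not the project's actual fix) is to clamp the row count before the division so an unexpected 0 coming from a setting can't crash the grid step:

```python
import math

def safe_grid_dims(imgs, rows):
    # Clamp to at least one row/column so a 0 coming from the UI or
    # options can't trigger ZeroDivisionError in image_grid().
    rows = max(int(rows), 1)
    cols = max(math.ceil(len(imgs) / rows), 1)
    return rows, cols
```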

commented

Pushed a major txt2img UI overhaul that has img2img working in my tests.
Please update and open a new issue if the problem recurs.

commented
Traceback (most recent call last):
  File "C:\Users\PEJO\miniconda3\envs\ldo\lib\site-packages\gradio\routes.py", line 247, in run_predict
    output = await app.blocks.process_api(
  File "C:\Users\PEJO\miniconda3\envs\ldo\lib\site-packages\gradio\blocks.py", line 641, in process_api
    predictions, duration = await self.call_function(fn_index, processed_input)
  File "C:\Users\PEJO\miniconda3\envs\ldo\lib\site-packages\gradio\blocks.py", line 556, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\PEJO\miniconda3\envs\ldo\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\PEJO\miniconda3\envs\ldo\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\PEJO\miniconda3\envs\ldo\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "scripts/webui.py", line 1053, in txt2img
    output_images, seed, info, stats = process_images(
  File "scripts/webui.py", line 969, in process_images
    grid = image_grid(output_images, batch_size)
  File "scripts/webui.py", line 456, in image_grid
    cols = math.ceil(len(imgs) / rows)
ZeroDivisionError: division by zero
Shutting down...

Still getting this error with the latest repo (d667ff5) in Text-To-Image; it happens only if:

  • Upscale images using RealESRGAN is On (I did get the model and put it into the directory as noted in Installation instructions)
  • Number of images to generate > 1

If the number of images is 1 (with RealESRGAN), then empty data is received through the websocket (nothing shows up).

Edit: Fix faces using GFPGAN seems to cause this error too.
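Since the crash is in the shared grid step, a guard at the call site in process_images would also cover these RealESRGAN/GFPGAN paths. The snippet below is only a sketch: `output_images` and `batch_size` are taken from the traceback, and the surrounding save-grid logic is an assumption:

```python
# Hypothetical call-site guard; only build the grid when there is
# actually something to lay out and a valid batch size.
if output_images and batch_size > 0:
    grid = image_grid(output_images, batch_size)
else:
    grid = None  # skip the grid instead of dividing by zero
```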

commented

@wereii Yes, I have the exact same error too. It seems it hasn't been fixed.
If I uncheck "save individual images", this same error still occurs.

@hlky Could you please have a second look at this issue?