replicate / cog-stable-diffusion

Diffusers Stable Diffusion as a Cog model

Home Page: https://replicate.com/stability-ai/stable-diffusion

ValueError: operands could not be broadcast together with shapes (2,) (8,)

matthewtoast opened this issue

Version a9758cb started giving me the following error this morning for all img2img queries made via the API:

Running predict()...
Using seed: 36432201
Traceback (most recent call last):
File "/src/src/cog/python/cog/server/worker.py", line 209, in _predict
result = self._predictor.predict(**payload)
File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 12, in decorate_autocast
return func(*args, **kwargs)
File "/src/predict.py", line 107, in predict
output = self.pipe(
File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/src/image_to_image.py", line 112, in __call__
self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs)
File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/schedulers/scheduling_pndm.py", line 115, in set_timesteps
prk_timesteps = np.array(self._timesteps[-self.pndm_order :]).repeat(2) + np.tile(
ValueError: operands could not be broadcast together with shapes (2,) (8,)
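
For reference, the broadcast failure can be reproduced outside the pipeline with a few lines of numpy that mirror the arithmetic in scheduling_pndm.set_timesteps. This is a minimal sketch, assuming 1000 training timesteps, a PNDM order of 4, and that the PRK warm-up path is taken (i.e. skip_prk_steps is False); those values are assumptions, not taken from the report:

  import numpy as np

  # Assumed values: 1000 training timesteps, PNDM order of 4.
  num_train_timesteps = 1000
  pndm_order = 4
  num_inference_steps = 1

  # With a single inference step only one timestep is generated...
  _timesteps = list(
      range(0, num_train_timesteps, num_train_timesteps // num_inference_steps)
  )  # -> [0]

  # ...so the repeated tail has shape (2,) while the tiled offsets have shape (8,).
  prk_timesteps = np.array(_timesteps[-pndm_order:]).repeat(2) + np.tile(
      np.array([0, num_train_timesteps // num_inference_steps // 2]), pndm_order
  )
  # ValueError: operands could not be broadcast together with shapes (2,) (8,)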

Note: I can only repro this when using API calls. I can't get it to happen via the Replicate web UI. Example parameters:

  num_outputs: 1,
  num_inference_steps: 1,
  guidance_scale: 15,
  prompt_strength: 0.7,
  init_image: 'https://sdui-staging.imgix.net/uploads/cl9hefbnr00743r44eyfhoox7/file-bddcaaabd0334f6fb363f6c058103fa97d41c715-undefined?w=1024&h=1024',
  prompt: 'painting of a close up of a black dog on a leash',
  seed: 238206034
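
For completeness, this is roughly how the failing request is made on my side (a sketch using the replicate Python client; the version id is truncated to a9758cb in this report, so the full hash would need to be substituted):

  import replicate

  output = replicate.run(
      "stability-ai/stable-diffusion:a9758cb...",  # full version hash goes here
      input={
          "num_outputs": 1,
          "num_inference_steps": 1,
          "guidance_scale": 15,
          "prompt_strength": 0.7,
          "init_image": "https://sdui-staging.imgix.net/uploads/cl9hefbnr00743r44eyfhoox7/file-bddcaaabd0334f6fb363f6c058103fa97d41c715-undefined?w=1024&h=1024",
          "prompt": "painting of a close up of a black dog on a leash",
          "seed": 238206034,
      },
  )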

Here's some additional info: this only seems to occur when num_inference_steps is 1. The most recent version of this model appears to handle that setting fine, but the older versions don't.
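
In the meantime I'm working around it with a client-side guard before building the API payload (a sketch; the minimum of 4 is an assumption based on the scheduler's PNDM order of 4, not something confirmed by the maintainers):

  def safe_inference_steps(requested: int, minimum: int = 4) -> int:
      # Hypothetical guard: the PNDM PRK warm-up repeats the last
      # pndm_order (4) timesteps, so requesting at least 4 steps keeps the
      # shapes in set_timesteps consistent on the older model versions.
      return max(requested, minimum)

  # e.g. pass safe_inference_steps(1) instead of 1 as num_inference_steps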