AUTOMATIC1111 / stable-diffusion-webui-tensorrt


Cannot generate at higher res than 512x512

ellugia opened this issue · comments

Hi there, I've finished setting up TensorRT and converting my models to ONNX -> TRT, and everything works fine at the default resolution, but if I try anything higher than that I get these errors in my terminal:

[06/06/2023-01:18:53] [TRT] [E] 3: [executionContext.cpp::nvinfer1::rt::ExecutionContext::validateInputBindings::2082] Error Code 3: API Usage Error (Parameter check failed at: executionContext.cpp::nvinfer1::rt::ExecutionContext::validateInputBindings::2082, condition: profileMaxDims.d[i] >= dimensions.d[i]. Supplied binding dimension [2,4,128,128] for bindings[0] exceed min ~ max range at index 2, maximum dimension in profile is 64, minimum dimension in profile is 64, but supplied dimension is 128.
)
  0%|                                                                                                                                                                                    | 0/150 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(fiuk2qqxxz1f740)', 'cute kawaii, (sephiroth plush toy, one winged angel, evil:1.1), realistic texture, visible stitch line, soft smooth lighting, vibrant studio lighting, modular constructivism, physically based rendering, square image, final fantasy vii remake style', '', [], 150, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 1024, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], 0, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001943755D270>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000019474D210F0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000019474D22B90>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, None, None, False, None, None, False, 50) {}
    Traceback (most recent call last):
      File "D:\AUTOMATIC1111\stable-diffusion-webui\modules\call_queue.py", line 55, in f
        res = list(func(*args, **kwargs))
      File "D:\AUTOMATIC1111\stable-diffusion-webui\modules\call_queue.py", line 35, in f
        res = func(*args, **kwargs)
      File "D:\AUTOMATIC1111\stable-diffusion-webui\modules\txt2img.py", line 57, in txt2img
        processed = processing.process_images(p)
      File "D:\AUTOMATIC1111\stable-diffusion-webui\modules\processing.py", line 618, in process_images
        res = process_images_inner(p)
      File "D:\AUTOMATIC1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "D:\AUTOMATIC1111\stable-diffusion-webui\modules\processing.py", line 737, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "D:\AUTOMATIC1111\stable-diffusion-webui\modules\processing.py", line 988, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "D:\AUTOMATIC1111\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 433, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
      File "D:\AUTOMATIC1111\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 275, in launch_sampling
        return func()
      File "D:\AUTOMATIC1111\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 433, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
      File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "D:\AUTOMATIC1111\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\AUTOMATIC1111\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 155, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
      File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\AUTOMATIC1111\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "D:\AUTOMATIC1111\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "D:\AUTOMATIC1111\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "D:\AUTOMATIC1111\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "D:\AUTOMATIC1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\AUTOMATIC1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "D:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\AUTOMATIC1111\stable-diffusion-webui\modules\sd_unet.py", line 89, in UNetModel_forward
        return current_unet.forward(x, timesteps, context, *args, **kwargs)
      File "D:\AUTOMATIC1111\stable-diffusion-webui\extensions\stable-diffusion-webui-tensorrt\scripts\trt.py", line 86, in forward
        self.infer({"x": x, "timesteps": timesteps, "context": context})
      File "D:\AUTOMATIC1111\stable-diffusion-webui\extensions\stable-diffusion-webui-tensorrt\scripts\trt.py", line 69, in infer
        self.allocate_buffers(feed_dict)
      File "D:\AUTOMATIC1111\stable-diffusion-webui\extensions\stable-diffusion-webui-tensorrt\scripts\trt.py", line 63, in allocate_buffers
        raise Exception(f'bad shape for TensorRT input {binding}: {tuple(shape)}')
    Exception: bad shape for TensorRT input x: (2, 4, 128, 128)
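
Side note for anyone landing here: the exception comes from the extension's own guard in trt.py, which rejects any input shape that falls outside the optimization profile the engine was built with. Below is a minimal sketch of that kind of check using the TensorRT 8.x Python API; the helper name is mine, not the extension's.

    import tensorrt as trt

    def assert_shape_in_profile(engine, binding, shape, profile=0):
        # Hypothetical helper (not from the extension): it reproduces the
        # check that TensorRT's validateInputBindings performs against the
        # (min, opt, max) dims baked into the engine at conversion time.
        min_d, opt_d, max_d = engine.get_profile_shape(profile, binding)
        for i, dim in enumerate(shape):
            if not (min_d[i] <= dim <= max_d[i]):
                raise Exception(
                    f"bad shape for TensorRT input {binding}: {tuple(shape)} "
                    f"(dim {i} must be within [{min_d[i]}, {max_d[i]}])")

    # For a 512x512-only engine, min == max == (2, 4, 64, 64) for input "x",
    # so a 1024x1024 request, which needs (2, 4, 128, 128), fails as above.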
commented

When you converted your model, what max width and max height did you select? If you only selected 512x512 as the max, then any size larger than that will give this error.

Try converting the models with min AND max width at 768 and min AND max height at 448 (multiples of 64!), or the other way around for portraits. You need to use the same values for min and max.
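
If it helps, here is the arithmetic behind that advice as a quick sanity check. The 1/8 latent scaling is read off the error message above and the 64-pixel granularity comes from this comment, so treat both as assumptions rather than documented requirements; the helper itself is hypothetical:

    def check_resolution(width, height, max_width=512, max_height=512):
        # Hypothetical sanity check mirroring the advice above. The UNet
        # input "x" is the latent at 1/8 pixel resolution, so an engine
        # converted with a 512x512 max has a profile max of 512 // 8 = 64
        # per spatial dim.
        for name, px, cap in (("width", width, max_width),
                              ("height", height, max_height)):
            if px % 64:
                raise ValueError(f"{name}={px} is not a multiple of 64")
            if px > cap:
                raise ValueError(f"{name}={px} needs latent dim {px // 8}, "
                                 f"but the profile only goes up to {cap // 8}")

    check_resolution(768, 448, max_width=768, max_height=448)  # fine for a 768x448 engine
    check_resolution(1024, 1024)  # raises: latent 128 > profile max 64, as in the log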

For batch size, work down step by step from 6 or so until the conversion starts; 4 should work, if I remember correctly.

You basically need to bake a model for every resolution you want to generate at, if you also want them optimized for the maximum batch size.
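
Each baked engine is really just a TensorRT optimization profile pinned to one shape. Here is a rough sketch of what a fixed-shape conversion pins down (illustrative only, not the extension's actual code: the ONNX path is a placeholder, the batch axis of 2 assumes batch size 1 doubled for classifier-free guidance as in the log above, and the real UNet would also need profiles for its timesteps and context inputs):

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.INFO)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open("unet.onnx", "rb") as f:  # hypothetical path to the exported UNet
        parser.parse(f.read())

    config = builder.create_builder_config()
    profile = builder.create_optimization_profile()
    # min == opt == max pins the engine to a single shape: a 768x448 image
    # becomes a (2, 4, 448 // 8, 768 // 8) = (2, 4, 56, 96) latent. Anything
    # else fails validateInputBindings at inference time, as in the log above.
    shape = (2, 4, 56, 96)
    profile.set_shape("x", min=shape, opt=shape, max=shape)
    config.add_optimization_profile(profile)
    engine_bytes = builder.build_serialized_network(network, config)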

512x512 min + 512x512 max works with batch size 6, for example. I'm getting a combined 14 it/s on my 3060 12 GB with that.

Thank you both for the answers. Honestly, I feel a bit foolish now after going over the conversion steps again and realizing it was such an obvious detail. The whole topic of AI image generation can be so dense sometimes that I kept following a tutorial word for word without paying much attention to what I was doing. Thank you!