bilibili / ailab


RuntimeError: cuDNN filters (a.k.a. weights) must be contiguous in desired memory_format

wanstr opened this issue · comments

I frequently and sporadically get errors like the one below. It is not reliably reproducible: the same video sometimes processes fine, and sometimes fails on a retry with nothing changed.

Exception in thread Thread-16:
Traceback (most recent call last):
  File "threading.py", line 916, in _bootstrap_inner
  File "F:\RealCUGAN\runtime\inference_video.py", line 36, in run
    self.res_q.put(self.inference(tmp))
  File "F:\RealCUGAN\runtime\inference_video.py", line 25, in inference
    res = self.model(np_frame, tile, cache_mode, alpha)
  File "F:\RealCUGAN\runtime\upcunet_v3.py", line 1292, in __call__
    result = self.tensor2np(self.model(tensor, tile_mode, cache_mode, alpha, self.pro))
  File "F:\RealCUGAN\runtime\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\RealCUGAN\runtime\upcunet_v3.py", line 316, in forward
    opt_unet1 = self.unet1.forward_b(tmp0, x_crop)
  File "F:\RealCUGAN\runtime\upcunet_v3.py", line 117, in forward_b
    x2 = self.conv2_up(x2)
  File "F:\RealCUGAN\runtime\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\RealCUGAN\runtime\torch\nn\modules\conv.py", line 925, in forward
    output_padding, self.groups, self.dilation)
RuntimeError: cuDNN filters (a.k.a. weights) must be contiguous in desired memory_format
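The message indicates that cuDNN was handed a weight tensor whose memory layout is not contiguous in the expected memory_format. As a defensive workaround (a hypothetical helper sketched here, not part of RealCUGAN), one can force every parameter into contiguous memory after loading the model and before inference:

```python
import torch
import torch.nn as nn

def make_weights_contiguous(model: nn.Module) -> nn.Module:
    """Workaround sketch: make every parameter tensor contiguous,
    which is what the cuDNN error message asks for. This does not
    address the root cause, only the symptom."""
    with torch.no_grad():
        for p in model.parameters():
            if not p.is_contiguous():
                p.data = p.data.contiguous()
    return model
```

Calling this once after `load_state_dict` (or after any `.to(memory_format=...)` conversion) is cheap; whether it actually prevents the sporadic failure on 30-series cards would need to be verified.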

This issue mostly shows up on newer GPUs such as the 3090/3090 Ti, where it happens basically every day. It occurs less often on the 3080 Ti, and other cards also hit it at random.

3060 laptop GPU: on average once per 1000 frames processed (Win11).

This never happened with the old version; it started as soon as I switched to the new pro models. 3080 Laptop GPU.