zzh-tech / ESTRNN

[ECCV2020 Spotlight] Efficient Spatio-Temporal Recurrent Neural Network for Video Deblurring

NotImplementedError: There were no tensor arguments to this function

Entretoize opened this issue

I receive an error I don't understand when trying to run the inference script:

(py36pt1.6) H:\git\VideoDebluring2\ESTRNN>python inference.py
Traceback (most recent call last):
  File "inference.py", line 58, in <module>
    output_seq = model([input_seq, ])
  File "H:\miniconda\envs\py36pt1.6\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "H:\miniconda\envs\py36pt1.6\lib\site-packages\torch\nn\parallel\data_parallel.py", line 166, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "H:\miniconda\envs\py36pt1.6\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "H:\git\VideoDebluring2\ESTRNN\model\model.py", line 15, in forward
    outputs = self.module.feed(self.model, iter_samples)
  File "H:\git\VideoDebluring2\ESTRNN\model\ESTRNN.py", line 233, in feed
    outputs = model(inputs)
  File "H:\miniconda\envs\py36pt1.6\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "H:\git\VideoDebluring2\ESTRNN\model\ESTRNN.py", line 209, in forward
    return torch.cat(outputs, dim=1)
NotImplementedError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat.  This usually means that this function requires a non-empty list of Tensors, or that you (the operator writer) forgot to register a fallback function.  Available functions are [CPU, CUDA, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].

CPU: registered at aten\src\ATen\RegisterCPU.cpp:18433 [kernel]
CUDA: registered at aten\src\ATen\RegisterCUDA.cpp:26496 [kernel]
QuantizedCPU: registered at aten\src\ATen\RegisterQuantizedCPU.cpp:1068 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:47 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:18 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:18 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:64 [backend fallback]
AutogradOther: registered at ..\torch\csrc\autograd\generated\VariableType_3.cpp:10141 [autograd kernel]
AutogradCPU: registered at ..\torch\csrc\autograd\generated\VariableType_3.cpp:10141 [autograd kernel]
AutogradCUDA: registered at ..\torch\csrc\autograd\generated\VariableType_3.cpp:10141 [autograd kernel]
AutogradXLA: registered at ..\torch\csrc\autograd\generated\VariableType_3.cpp:10141 [autograd kernel]
AutogradLazy: registered at ..\torch\csrc\autograd\generated\VariableType_3.cpp:10141 [autograd kernel]
AutogradXPU: registered at ..\torch\csrc\autograd\generated\VariableType_3.cpp:10141 [autograd kernel]
AutogradMLC: registered at ..\torch\csrc\autograd\generated\VariableType_3.cpp:10141 [autograd kernel]
AutogradHPU: registered at ..\torch\csrc\autograd\generated\VariableType_3.cpp:10141 [autograd kernel]
AutogradNestedTensor: registered at ..\torch\csrc\autograd\generated\VariableType_3.cpp:10141 [autograd kernel]
AutogradPrivateUse1: registered at ..\torch\csrc\autograd\generated\VariableType_3.cpp:10141 [autograd kernel]
AutogradPrivateUse2: registered at ..\torch\csrc\autograd\generated\VariableType_3.cpp:10141 [autograd kernel]
AutogradPrivateUse3: registered at ..\torch\csrc\autograd\generated\VariableType_3.cpp:10141 [autograd kernel]
Tracer: registered at ..\torch\csrc\autograd\generated\TraceType_3.cpp:11560 [kernel]
UNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:466 [backend fallback]
Autocast: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:305 [backend fallback]
Batched: registered at ..\aten\src\ATen\BatchingRegistrations.cpp:1016 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]

Can someone help?
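For context, the `torch.cat` error above fires whenever the `outputs` list built in `ESTRNN.forward` ends up empty: `torch.cat([])` has no tensor arguments, hence the `NotImplementedError` for `aten::_cat`. A minimal sketch of the likely mechanism (the frame counts below are illustrative, not ESTRNN's actual constants): the model only emits an output for centre frames that have enough past and future context, so a clip with too few frames, or an input folder whose images were never loaded, yields zero outputs.

```python
def valid_centre_frames(num_frames, past=2, future=2):
    """Indices of frames with `past` frames before and `future` frames after.

    Hypothetical sketch of a recurrent model's temporal windowing;
    `past`/`future` are illustrative values, not ESTRNN's real config.
    """
    return list(range(past, num_frames - future))

# A 10-frame clip leaves 6 usable centre frames...
print(valid_centre_frames(10))  # [2, 3, 4, 5, 6, 7]
# ...but a 4-frame clip leaves none: empty outputs, so torch.cat([]) fails.
print(valid_centre_frames(4))   # []
```

So before suspecting the model itself, it is worth checking that the input path resolves and actually contains enough readable frames; a wrong path often produces an empty sequence silently.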

Is this your command?

python inference.py

Yes, but I modified it so I don't have to type the parameters each time:

if __name__ == '__main__':
    parser = ArgumentParser()
    parser.add_argument('--src', type=str, default="jardin", help="the path of input video or video dir")
    parser.add_argument('--ckpt', type=str, default="checkpoints/ESTRNN_C80B15_BSD_2ms16ms.tar", help="the path of checkpoint of pretrained model")
    parser.add_argument('--dst', type=str, default="result", help="where to store the results")
    args = parser.parse_args()

What is the image file format in the "jardin" directory?
If you don't mind, could you share the contents of that folder with me?

After your message I tried with the BSD data and it didn't work either. Since I had already run your script successfully before, I re-downloaded all the .py files, and now it works. Sorry, I think I had modified something somewhere that broke the code...

Maybe I should open a new issue (tell me if so), but I have a small problem: the script works great on blurred images, but when there is no blur at all, the original image is sharper than the result. Is there a way to add a threshold that prevents blurring sharp images, or sharp portions of an image?
(sample images : https://linkall.pro/ESTRNN/000001.png and 000002.png...)

Maybe you can design a solution based on blur detection.
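A common way to do that (not something ESTRNN provides; this is just an illustrative sketch) is a variance-of-Laplacian sharpness score: score both the original and the deblurred frame and keep whichever is sharper, or skip deblurring when the original already scores above a threshold. The pure-NumPy Laplacian below avoids an OpenCV dependency; with OpenCV the usual one-liner is `cv2.Laplacian(img, cv2.CV_64F).var()`.

```python
import numpy as np

def laplacian_variance(gray):
    """Sharpness score: variance of the 4-neighbour Laplacian.

    `gray` is a 2-D array (grayscale frame); higher means sharper.
    """
    g = np.asarray(gray, dtype=np.float64)
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return lap.var()

def keep_sharper(original, restored):
    """Return whichever of the two frames scores higher on sharpness."""
    if laplacian_variance(original) >= laplacian_variance(restored):
        return original
    return restored

# A checkerboard (lots of edges) scores far higher than a flat frame:
checker = (np.indices((8, 8)).sum(axis=0) % 2) * 255.0
flat = np.zeros((8, 8))
print(laplacian_variance(checker) > laplacian_variance(flat))  # True
```

For video, you would compute the score per frame (or per tile, to protect only the sharp portions of a frame) and blend or select accordingly.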

Yes, I'll compare the original and the modified frames and keep the best after computation.
Thanks!