alex04072000 / FuSta

Hybrid Neural Fusion for Full-frame Video Stabilization

Home Page: https://alex04072000.github.io/FuSta/


RuntimeError: CUDA out of memory. Tried to allocate 614.00 MiB (GPU 0; 15.78 GiB total capacity; 12.94 GiB already allocated; 584.75 MiB free; 14.16 GiB reserved in total by PyTorch)

lbqdhg opened this issue · comments

Hello, I ran it with the high-RAM runtime and the following error occurred. How can I solve it?

00001.png
/usr/local/envs/FuSta/lib/python3.6/site-packages/torch/nn/functional.py:2941: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
/usr/local/envs/FuSta/lib/python3.6/site-packages/torch/nn/functional.py:3121: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
Traceback (most recent call last):
File "run_FuSta.py", line 317, in
frame_out = model(input_frames, F_kprime_to_k, forward_flows, backward_flows)
File "/usr/local/envs/FuSta/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/FuSta/models_arbitrary/__init__.py", line 12, in forward
return self.model(input_frames, F_kprime_to_k, F_n_to_k_s, F_k_to_n_s)
File "/usr/local/envs/FuSta/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/FuSta/models_arbitrary/adacofnet.py", line 662, in forward
I_pred, C = self.refinementNetwork(torch.cat([tenWarpedFeat[i], global_average_pooled_feature, tenWarpedMask[i]], 1))
File "/usr/local/envs/FuSta/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/FuSta/models_arbitrary/adacofnet.py", line 240, in forward
x_1 = self.layer1(x_0)
File "/usr/local/envs/FuSta/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/FuSta/models_arbitrary/adacofnet.py", line 158, in forward
x_a = self.ch_a(x)
File "/usr/local/envs/FuSta/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/envs/FuSta/lib/python3.6/site-packages/torch/nn/modules/container.py", line 117, in forward
input = module(input)
File "/usr/local/envs/FuSta/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/FuSta/models_arbitrary/adacofnet.py", line 116, in forward
x = x * self.gated(mask)
RuntimeError: CUDA out of memory. Tried to allocate 614.00 MiB (GPU 0; 15.78 GiB total capacity; 12.94 GiB already allocated; 584.75 MiB free; 14.16 GiB reserved in total by PyTorch)

CalledProcessError Traceback (most recent call last)
in ()
----> 1 get_ipython().run_cell_magic('shell', '', 'eval "$(conda shell.bash hook)" # copy conda command to shell\nconda deactivate\nconda activate FuSta\ncd /content/FuSta/\npython run_FuSta.py --load FuSta_model/checkpoint/model_epoch050.pth --input_frames_path input_frames/ --warping_field_path CVPR2020_warping_field/ --output_path output/ --temporal_width 41 --temporal_step 4')

2 frames
/usr/local/lib/python3.7/dist-packages/google/colab/_system_commands.py in check_returncode(self)
137 if self.returncode:
138 raise subprocess.CalledProcessError(
--> 139 returncode=self.returncode, cmd=self.args, output=self.output)
140
141 def _repr_pretty_(self, p, cycle): # pylint:disable=unused-argument

CalledProcessError: Command 'eval "$(conda shell.bash hook)" # copy conda command to shell
conda deactivate
conda activate FuSta
cd /content/FuSta/
python run_FuSta.py --load FuSta_model/checkpoint/model_epoch050.pth --input_frames_path input_frames/ --warping_field_path CVPR2020_warping_field/ --output_path output/ --temporal_width 41 --temporal_step 4' returned non-zero exit status 1.
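For what it's worth, the numbers in the OOM message itself tell you something (a quick sanity check on the figures above, not specific to FuSta):

```python
# Figures copied from the OOM message above.
MIB_PER_GIB = 1024

allocated_gib = 12.94   # "already allocated" by live tensors
reserved_gib  = 14.16   # "reserved in total by PyTorch" (caching allocator)
request_mib   = 614.00  # the allocation that failed

# Memory PyTorch has reserved but not handed out to tensors:
cached_slack_gib = reserved_gib - allocated_gib  # ~1.22 GiB

# The failed 614 MiB request is smaller than that slack, so part of
# the problem is fragmentation: no single contiguous free block of
# 614 MiB exists even though the total cached slack would cover it.
print(f"slack: {cached_slack_gib:.2f} GiB, request: {request_mib / MIB_PER_GIB:.2f} GiB")
```

Calling `torch.cuda.empty_cache()` between clips can sometimes relieve fragmentation, but it won't help when the activations themselves don't fit, which is likely the case at 800x1422.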

It looks like you are trying to stabilise a video with resolution 800x1422. My guess is that one of the involved networks (probably RAFT) requires more memory than a Colab GPU can provide. I'm not sure what I can advise other than decreasing the input video resolution.
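If you want to try the resolution route, here is a minimal sketch that downscales every frame before running FuSta. The directory names, frame naming, and 0.5 factor are assumptions; Pillow must be installed, and note that the CVPR2020 warping fields would have to be regenerated to match the new resolution:

```python
from pathlib import Path
from PIL import Image

SRC = Path("input_frames")       # assumed layout: 00001.png, 00002.png, ...
DST = Path("input_frames_half")  # downscaled copies go here
SCALE = 0.5                      # halve each dimension -> roughly 1/4 the activation memory

DST.mkdir(exist_ok=True)
for frame in sorted(SRC.glob("*.png")):
    img = Image.open(frame)
    w, h = img.size
    small = img.resize((int(w * SCALE), int(h * SCALE)), Image.BILINEAR)
    small.save(DST / frame.name)
```

Then point `--input_frames_path` at the downscaled folder. Halving both dimensions quarters the per-layer feature-map size, which is usually enough to fit a 16 GB Colab GPU.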

Thanks! By the way, will the processed video have a few more frames compared to the original?

There shouldn't be any extra frames added; at least there weren't any the last time I tested it. Did you actually encounter this, or are you just asking?

I encountered it and just figured it out. What I need is a 30 fps output, but the default is 25 fps. I modified it and got what I wanted. Thanks very much for your time.
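For anyone else hitting the frame-rate mismatch: if the stabilized output is a folder of frames, you can assemble them at whatever rate you need with ffmpeg. A sketch that builds the command (the frame naming pattern and output filename are assumptions):

```python
# Build an ffmpeg command that encodes the frames in output/ at 30 fps.
# Execute it with subprocess.run(cmd, check=True) once ffmpeg is installed.
fps = 30
cmd = [
    "ffmpeg",
    "-framerate", str(fps),    # input frame rate for the image sequence
    "-i", "output/%05d.png",   # assumed frame naming pattern
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",     # widely compatible pixel format
    "stabilized_30fps.mp4",
]
print(" ".join(cmd))
```

Setting `-framerate` on the input side keeps a one-to-one mapping between frames and output, so no frames are dropped or duplicated.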