baowenbo / DAIN

Depth-Aware Video Frame Interpolation (CVPR 2019)

Home Page: https://sites.google.com/view/wenbobao/dain

Google Colab CUDA error

aiXander opened this issue

Tried to get this running on Colab, but I'm running into CUDA issues...
Link to notebook.

Any ideas on how to fix this? Would be great to just have a Colab to experiment!

error in correlation_forward_cuda_kernel: no kernel image is available for execution on the device
Warning: Legacy autograd function with non-static forward method is deprecated and will be removed in 1.3. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function) (THPFunction_do_forward at /pytorch/torch/csrc/autograd/python_function.cpp:622)

Traceback (most recent call last):
  File "demo_MiddleBury.py", line 131, in <module>
    y_s,offset,filter = model(torch.stack((X0, X1),dim = 0))
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/DAIN/networks/DAIN.py", line 149, in forward
    self.forward_flownets(self.flownets, cur_offset_input, time_offsets=time_offsets),
  File "/content/DAIN/networks/DAIN.py", line 205, in forward_flownets
    temp = model(input)  # this is a single direction motion results, but not a bidirectional one
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/DAIN/PWCNet/PWCNet.py", line 220, in forward
    corr6 = self.corr(c16, c26)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/DAIN/PWCNet/correlation_package_pytorch1_0/correlation.py", line 59, in forward
    result = CorrelationFunction(self.pad_size, self.kernel_size, self.max_displacement,self.stride1, self.stride2, self.corr_multiply)(input1, input2)
  File "/content/DAIN/PWCNet/correlation_package_pytorch1_0/correlation.py", line 27, in forward
    self.pad_size, self.kernel_size, self.max_displacement,self.stride1, self.stride2, self.corr_multiply)

RuntimeError: CUDA call failed (correlation_forward_cuda at correlation_cuda.cc:80)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7fe6bc85e193 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
frame #1: correlation_forward_cuda(at::Tensor&, at::Tensor&, at::Tensor&, at::Tensor&, at::Tensor&, int, int, int, int, int, int) + 0x628 (0x7fe6b8f59ad8 in /usr/local/lib/python3.6/dist-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #2: + 0x1bd3a (0x7fe6b8f69d3a in /usr/local/lib/python3.6/dist-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #3: + 0x18880 (0x7fe6b8f66880 in /usr/local/lib/python3.6/dist-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #4: python3() [0x50ac25]

I have the same issues:
error in correlation_forward_cuda_kernel: no kernel image is available for execution on the device
RuntimeError: CUDA call failed (correlation_forward_cuda at correlation_cuda.cc:80)

Fixed this issue by setting the nvcc flags '-gencode', 'arch=compute_70,code=compute_70' in the build (see the sketch below).
GTX 2080Ti
python 3.6
torch 1.0.1
cuda 9.0
cudnn 7.4
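
For anyone wondering where that flag goes: a minimal sketch of the change in the correlation package's setup.py (file names and the existing flag list may differ in your checkout; pick a compute_XX your GPU can run — the commenter used compute_70, while compute_75 targets Turing cards such as the Tesla T4 or RTX 20xx natively):

# setup.py of the correlation package (illustrative sketch, not the exact upstream file)
from setuptools import setup
from torch.utils.cpp_extension import CUDAExtension, BuildExtension

nvcc_args = [
    '-gencode', 'arch=compute_60,code=sm_60',
    '-gencode', 'arch=compute_70,code=sm_70',
    '-gencode', 'arch=compute_75,code=sm_75',
    '-gencode', 'arch=compute_75,code=compute_75',  # PTX fallback for newer GPUs
]

setup(
    name='correlation_cuda',
    ext_modules=[
        CUDAExtension('correlation_cuda',
                      ['correlation_cuda.cc', 'correlation_cuda_kernel.cu'],
                      extra_compile_args={'cxx': ['-std=c++11'], 'nvcc': nvcc_args}),
    ],
    cmdclass={'build_ext': BuildExtension},
)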

Yeah, we're waiting for a Colab version. Good luck.

If anyone succeeds with Google Colab, please share a working notebook.
BIG thanks in advance :)

Mine seems to work just following the instructions: https://colab.research.google.com/drive/1hkPQQNRH1ykKJN6V7tTiGnZOHWm7JRdr

They must have updated the code.

@btahir Thanks for sharing this link!
For the last two days I've been trying to make it work, but I always ran into some trouble. The best I could get was a partially working first demo, processing only the 10th and 11th frames of a randomly chosen part of the MiddleBury image set (with your CV2 mod). It produced the frame between 10 and 11, but then stopped with an error on every run. What's strange is that sometimes it failed at the beginning and sometimes it partially worked, as if each run had SOME chance to partially succeed. I can't make sense out of it.
I hope someone can make a tutorial for beginners like me on how to run DAIN in a Google Colab notebook. DAIN-APP is a no-go for me since I don't have an NVidia GPU. I was hoping to use DAIN for my pixel-art work.

Sorry for a lot of off-topic, won't happen again :)

Is there anyone who has used the TPU on the Google Colab to run this program?

Mine seems to work just following the instructions: https://colab.research.google.com/drive/1hkPQQNRH1ykKJN6V7tTiGnZOHWm7JRdr

They must have updated the code.

Is there anyone who has used the TPU on the Google Colab to run this program?

Follow this Colab, but install scipy 1.0.0 and remove the %%writefile cell.

It really depends on which GPU Colab uses in the backend.
In my case it's one with the Turing architecture (like a Tesla T4), in which case the NVCC compiler arguments have to be adjusted in the packages (a possible sketch follows below). Maybe in the future these packages will detect that at compile time ;)
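
A possible sketch of that compile-time detection (assumes torch is importable inside setup.py, which is not how the DAIN packages currently work):

# Build the -gencode flag from whatever GPU this Colab session was assigned (sketch)
import torch

def gpu_gencode_flags():
    if not torch.cuda.is_available():
        return []
    major, minor = torch.cuda.get_device_capability(0)  # e.g. (7, 5) on a Tesla T4
    arch = '{}{}'.format(major, minor)
    return ['-gencode', 'arch=compute_{0},code=sm_{0}'.format(arch)]

nvcc_args = gpu_gencode_flags()
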
Also, imread from scipy was deprecated in my version: https://docs.scipy.org/doc/scipy-1.2.1/reference/generated/scipy.misc.imread.html
Use an older version or switch to OpenCV/Pillow etc., e.g. as sketched below.
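
For the imread part, a drop-in replacement along these lines should work (a sketch only; imageio and OpenCV shown, the actual call sites in DAIN may differ):

# Replacements for the removed scipy.misc.imread (illustrative)
import imageio
import cv2

def imread_rgb(path):
    # imageio returns an RGB array, like scipy.misc.imread used to
    return imageio.imread(path)

def imread_rgb_cv2(path):
    # OpenCV loads BGR, so convert to RGB to keep the old behaviour
    img = cv2.imread(path, cv2.IMREAD_COLOR)
    return cv2.cvtColor(img, cv2.COLOR_BGR2RGB)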

Anyway, if you don't know how to do it, have a look at my fork as an example:
https://github.com/whplh/DAIN. There is a Colab file there too, but better to write your own. =)
Cheers and happy coding.

How do I use my own video with this project?
(I have no programming experience :((

@tamu520 - there's a very early version of it packaged as an app here - https://grisk.itch.io/dain-app
That might be easier for you.

@Gunslap Thanks for the answer, but I am referring to how to test my own video with the project in Google Colab.

Hi! We've made some progress on this front. The latest version of a working Colab notebook can be found here: https://github.com/AlphaGit/DAIN/blob/master/Colab_DAIN_alpha.ipynb

Once we get it updated on the upstream repo (styler00dollar's), we will submit a PR for this repo. In the meantime, you can upload that notebook manually and change cell 5 so that it uses my repo, which has the latest changes.

@AlphaGit Thanks for sharing, but sometimes the '# Interpolation' cell gives an error.

@AlphaGit
On your notebook, following the steps on a local machine, I face the following error:
File "/home/user/DAIN/PWCNet/correlation_package_pytorch1_0/correlation.py", line 4, in
import correlation_cuda
ModuleNotFoundError: No module named 'correlation_cuda'
I am on CUDA 10, RTX 2080Ti Python 3.6 PyTorch 1.1.0.

And I believe %shell commands don't work locally. Any suggestions on what I could replace the %shell commands with, other than the ! prefix?

(Also, I'm sorry if this isn't the right place to ask this question)

@tamu520 Kind of depends what the error is. If you're getting that "No module named correlation_cuda", it means you missed running the build cell.

@SreeHarshaNelaturu You might be missing the step to build the modules that DAIN has. That's the cell that has the warning about taking long. Yes, using ! might be what you're looking for.

@AlphaGit Thanks a bunch for sharing! But (like probably everyone here...) I've got a problem. When I try to check the GPU with
!nvidia-smi --query-gpu=gpu_name,driver_version,memory.total --format=csv
I always get the message:
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
I don't understand it, because I'm using a normal Google Colab Notebook. Sorry if it is something very obvious but I'm no expert here.
Thanks in advance!

@SreeHarshaNelaturu thank you! It wasn't set before, but I just activated it, tried again and it worked.

Sorry to bother again, but...
I already tested the notebook on a short video and it went smoothly from start to finish. However, now I'm trying again and it throws an error at the last step. I think it doesn't recognize the Google Drive path where the output frames are (I already checked, they're all there). Could somebody give me a hand? This is the error that comes out:

%cd '/content/gdrive/My Drive/DAIN/tmp'
%shell ffmpeg -y -r 120 -f image2 -pattern_type glob -i '*.png' '/content/gdrive/My Drive/InputVideo/felixfinal.mp4'

[Errno 2] No such file or directory: '/content/gdrive/My Drive/DAIN/tmp'
/content/DAIN
ffmpeg version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)
  configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 78.100 / 55. 78.100
  libavcodec     57.107.100 / 57.107.100
  libavformat    57. 83.100 / 57. 83.100
  libavdevice    57. 10.100 / 57. 10.100
  libavfilter     6.107.100 /  6.107.100
  libavresample   3.  7.  0 /  3.  7.  0
  libswscale      4.  8.100 /  4.  8.100
  libswresample   2.  9.100 /  2.  9.100
  libpostproc    54.  7.100 / 54.  7.100
[image2 @ 0x55a91d29c000] Could not open file : *.png
[image2 @ 0x55a91d29c000] Could not find codec parameters for stream 0 (Video: png, none(pc)): unspecified size
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Input #0, image2, from '*.png':
  Duration: 00:00:00.01, start: 0.000000, bitrate: N/A
    Stream #0:0: Video: png, none(pc), 120 tbr, 120 tbn, 120 tbc
Output #0, mp4, to '/content/gdrive/My Drive/InputVideo/felixfinal.mp4':
Output file #0 does not contain any stream
---------------------------------------------------------------------------
CalledProcessError                        Traceback (most recent call last)
<ipython-input-20-1a3121e3883a> in <module>()
      1 get_ipython().magic("cd '/content/gdrive/My Drive/DAIN/tmp'")
----> 2 get_ipython().magic("shell ffmpeg -y -r 120 -f image2 -pattern_type glob -i '*.png' '/content/gdrive/My Drive/InputVideo/felixfinal.mp4'")

3 frames
/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py in magic(self, arg_s)
   2158         magic_name, _, magic_arg_s = arg_s.partition(' ')
   2159         magic_name = magic_name.lstrip(prefilter.ESC_MAGIC)
-> 2160         return self.run_line_magic(magic_name, magic_arg_s)
   2161 
   2162     #-------------------------------------------------------------------------

/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py in run_line_magic(self, magic_name, line)
   2079                 kwargs['local_ns'] = sys._getframe(stack_depth).f_locals
   2080             with self.builtin_trap:
-> 2081                 result = fn(*args,**kwargs)
   2082             return result
   2083 

/usr/local/lib/python3.6/dist-packages/google/colab/_system_commands.py in _shell_line_magic(line)
     68   """
     69   result = _run_command(line, clear_streamed_output=False)
---> 70   result.check_returncode()
     71   return result
     72 

/usr/local/lib/python3.6/dist-packages/google/colab/_system_commands.py in check_returncode(self)
    136     if self.returncode:
    137       raise subprocess.CalledProcessError(
--> 138           returncode=self.returncode, cmd=self.args, output=self.output)
    139 
    140   def _repr_pretty_(self, p, cycle):  # pylint:disable=unused-argument

CalledProcessError: Command 'ffmpeg -y -r 120 -f image2 -pattern_type glob -i '*.png' '/content/gdrive/My Drive/InputVideo/felixfinal.mp4'' returned non-zero exit status 1.

As a short note, Drive was successfully mounted at /content/gdrive.

[Errno 2] No such file or directory: '/content/gdrive/My Drive/DAIN/tmp'

Are you sure that directory exists? The error seems to imply it didn’t. Make sure you have the variables correctly set up.
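
If the directory really is missing (or the path variable points somewhere other than where the frames were written), a quick check like this before the ffmpeg cell can confirm it; the path is just the one used above and should be adjusted to your setup:

# Make sure the frame directory exists and actually contains PNGs before running ffmpeg (sketch)
import os

frames_dir = '/content/gdrive/My Drive/DAIN/tmp'  # adjust to wherever your output frames were written
os.makedirs(frames_dir, exist_ok=True)
print([f for f in os.listdir(frames_dir) if f.endswith('.png')][:5])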

Hello! Google Colab has been failing for two days now: the "Install dependencies" cell loads endlessly. Before that, everything worked.

Yeah I'm getting this error on styler00dollar's Colab now:

Running:

!python colab_interpolate.py --netName DAIN_slowmotion --time_step {fps/TARGET_FPS} --start_frame 1 --end_frame {pngs_generated_count} --frame_input_dir '{FRAME_INPUT_DIR}' --frame_output_dir '{FRAME_OUTPUT_DIR}'

Gives

/content/DAIN
Traceback (most recent call last):
  File "colab_interpolate.py", line 7, in <module>
    import networks
  File "/content/DAIN/networks/__init__.py", line 1, in <module>
    from .DAIN import DAIN
  File "/content/DAIN/networks/DAIN.py", line 4, in <module>
    from my_package.FilterInterpolation import  FilterInterpolationModule
  File "/content/DAIN/my_package/FilterInterpolation/__init__.py", line 1, in <module>
    from .FilterInterpolationModule import *
  File "/content/DAIN/my_package/FilterInterpolation/FilterInterpolationModule.py", line 6, in <module>
    from .FilterInterpolationLayer import FilterInterpolationLayer,WeightLayer, PixelValueLayer,PixelWeightLayer,ReliableWeightLayer
  File "/content/DAIN/my_package/FilterInterpolation/FilterInterpolationLayer.py", line 4, in <module>
    import filterinterpolation_cuda as my_lib
ModuleNotFoundError: No module named 'filterinterpolation_cuda'

Hi @alvisanovari -- that error usually means that the module compiled by the previous cells is not available. Make sure to run them all in order. If you keep receiving those issues, consider restarting the notebook and starting from scratch.

PS: I have been able to verify that with the linked notebook the build process doesn't quite finish. I'll work on it later and submit a PR. I'll provide more info here later.

@alvisanovari @AlphaGit
I ran into the same error, where my_package compiles incompletely, not only on Colab but also locally.
However, it's odd that I succeeded many times a month ago.
It makes me suspect that an upgrade of some dependency (PyTorch/CUDA/GCC related) is causing this failure.
It would be useful if you ran a fresh test cycle to confirm whether the current code works with the listed dependencies.
Many thanks.

I also tried creating a conda env from environment.yml, but this fails too :(

@alvisanovari I managed to get it working after also getting that error. I'm still testing and cleaning up the Colab; I'll share it when I'm done.

It seems Google Colab upgraded to PyTorch 1.5, which breaks the module build.

Add this cell and it should work. I still haven't been able to submit a proper PR, but I gave it a try and adding this before the build cell fixes it:

!pip install torch==1.4.0

@AlphaGit Thank you! I can get it to run now! The output video seems to be the same as the input though? The length of the video and the number of frames in the output directory are the same even though the time step was 0.5...shouldn't they have doubled?
Maybe I am missing something.

I'm getting some errors, like "resize_hotfix" is not defined, and others :/

@AlphaGit
Running !pip install torch==1.4.0 before the build makes the build somewhat more successful
(somewhat, because I still get errors like the following, even though they don't stop execution:
./build.sh: line 4: activate: No such file or directory,
a lot of "is deprecated [-Wdeprecated-declarations]",
and in the end, after best.pth is downloaded successfully:

E: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/c/cups/libcupsimage2_2.2.7-1ubuntu2.7_amd64.deb  404  Not Found [IP: 91.189.88.152 80]
E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/p/pulseaudio/libpulsedsp_11.1-1ubuntu7.4_amd64.deb  404  Not Found [IP: 91.189.88.152 80]
E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/p/pulseaudio/pulseaudio-utils_11.1-1ubuntu7.4_amd64.deb  404  Not Found [IP: 91.189.88.152 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

)

Using Tesla P100-PCIE-16GB, 418.67, 16280 MiB, I also get:

in [6] # ffmpeg extract: "/bin/sh: 1: identify: not found" (which I'm not sure is an error, as the cell finished executing)

in [9] # Interpolation:

Interpolate 9 frames
Traceback (most recent call last):
  File "colab_interpolate.py", line 112, in <module>
    y_s, offset, filter = model(torch.stack((X0, X1),dim = 0))
  (. . .)
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.

To reproduce the error: I'm running the notebook step by step as instructed.

According to the following settings, I successfully used DAIN on Colab today.

# Install PyTorch 1.4.0 with CUDA 10.0
!pip install torch==1.4.0+cu100 torchvision==0.5.0+cu100 -f https://download.pytorch.org/whl/torch_stable.html
# Then set the softlink to CUDA 10.0
!sudo ln -snf /usr/local/cuda-10.0 /usr/local/cuda
# After that we can perform a complete compilation.

You can also check my repository MVIMP for out-of-the-box DAIN functionality.
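
A quick way to confirm the pin and the softlink took effect before kicking off the build (illustrative only; the versions shown are what the commands above install):

# Sanity check: PyTorch and the nvcc on PATH should both report CUDA 10.0 (sketch)
import subprocess
import torch

print(torch.__version__)          # expect 1.4.0
print(torch.version.cuda)         # expect 10.0
print(torch.cuda.is_available())  # expect True on a GPU runtime
print(subprocess.run(['nvcc', '--version'], stdout=subprocess.PIPE, universal_newlines=True).stdout)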

According to the following settings, I successfully used DAIN on Colab today.

# Install PyTorch 1.4.0 with CUDA 10.0
!pip install torch==1.4.0+cu100 torchvision==0.5.0+cu100 -f https://download.pytorch.org/whl/torch_stable.html
# Then set the softlink to CUDA 10.0
!sudo ln -snf /usr/local/cuda-10.0 /usr/local/cuda
# After that we can perform a complete compilation.

You are welcome.

I tried that, and it works, thx :)
But for every single frame it interpolates, it also spits these into the console output:

/pytorch/torch/csrc/autograd/python_function.cpp:622: UserWarning: Legacy autograd function with non-static forward method is deprecated and will be removed in 1.3. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)
(the same warning is repeated once per frame)

:(

Can somebody make a video tutorial on this please -_-

I dunno how, but it works xD, thanks to all of you :)

@Brokensilence Those are warnings, meaning they don't prevent the code from running. I took a look at that a long time ago and wasn't able to fix it, nor did I find an easy way to hide it. I agree it is a bit noisy.

@0nepixel I'm glad you got it working. The most difficult part is to understand how to operate a notebook, but then, it's just about reading, modifying variables, and running the cells in order. If you'd like to do the video tutorial, we'd very much appreciate it!

My first language is Spanish :/ Maybe a step-by-step tutorial with images instead.

@0nepixel Also mine! =D I lack video editing skills, but if you need someone to narrate over a video, count me in, I do have audio editing experience.

@0nepixel Also mine! =D I lack video editing skills, but if you need someone to narrate over a video, count me in, I do have audio editing experience.
AlphaGit, are you a Godot user? Your nickname sounds so familiar xD

@0nepixel

AlphaGit, are you a Godot user? Your nickname sounds so familiar xD

I am not. If you’d like to continue the conversation you can reach out to me at alphagma@gmail.com — that way we won’t create unnecessary chatter for all other members subscribed to this issue.

See ya around!

@Brokensilence
It's just a standard PyTorch deprecation UserWarning; you can easily get rid of it with -W ignore (see the sketch below).
Or, you're welcome to see how I handle it in MVIMP.
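
To make that concrete, both of these silence the legacy-autograd UserWarning (a sketch of the suggestion above, not DAIN's own code):

# Option 1: pass the filter on the command line
#   !python -W ignore colab_interpolate.py ...

# Option 2: filter inside the script, e.g. near the top of colab_interpolate.py
import warnings
warnings.filterwarnings("ignore", category=UserWarning)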

CAN SOMEONE PLEASE HELP ME?

When I run the Colab version,

Interpolation

%shell mkdir -p '/content/gdrive/My Drive/DAIN/frames-out'
%cd /content/DAIN

!python colab_interpolate.py --netName DAIN --time_step 49.94 --start_frame 1 --end_frame 450 --frame_input_dir '/content/gdrive/My Drive/DAIN/frames-in' --frame_output_dir '/content/gdrive/My Drive/DAIN/frames-out'

I'm encountering this error:

/content/DAIN
revise the unique id to a random numer 76481
Namespace(SAVED_MODEL=None, alpha=[0.0, 1.0], arg='./model_weights/76481-Wed-May-20-16:48/args.txt', batch_size=1, channels=3, ctx_lr_coe=1.0, datasetName='Vimeo_90K_interp', datasetPath='', dataset_split=97, debug=False, depth_lr_coe=0.001, dtype=<class 'torch.cuda.FloatTensor'>, end_frame=450, epsilon=1e-06, factor=0.2, filter_lr_coe=1.0, filter_size=4, flow_lr_coe=0.01, force=False, frame_input_dir='/content/gdrive/My Drive/DAIN/frames-in', frame_output_dir='/content/gdrive/My Drive/DAIN/frames-out', log='./model_weights/76481-Wed-May-20-16:48/log.txt', lr=0.002, netName='DAIN', no_date=False, numEpoch=100, occ_lr_coe=1.0, patience=5, rectify_lr=0.001, save_path='./model_weights/76481-Wed-May-20-16:48', save_which=1, seed=1, start_frame=1, time_step=49.94, uid=None, use_cuda=True, use_cudnn=1, weight_decay=0, workers=8)
cudnn is used
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/upsampling.py:129: UserWarning: nn.UpsamplingNearest2d is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.{} is deprecated. Use nn.functional.interpolate instead.".format(self.name))
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/upsampling.py:129: UserWarning: nn.Upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.{} is deprecated. Use nn.functional.interpolate instead.".format(self.name))
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2423: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
Traceback (most recent call last):
File "colab_interpolate.py", line 112, in
y_s, offset, filter = model(torch.stack((X0, X1),dim = 0))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "/content/DAIN/networks/DAIN.py", line 165, in forward
cur_offset_output = [cur_offset_outputs[0][0], cur_offset_outputs[1][0]]
IndexError: list index out of range

@xstrauss This error ("IndexError: list index out of range" in DAIN.py) seems to be unrelated to the issue being discussed (Google Colab compatibility). Would you mind opening another issue on GitHub for that? If you can, also include the input video and the configuration you're trying to use. If we're able to reproduce it, we might get to the underlying problem.

/EDIT: Nevermind, I just saw you did. It's preferable to mention people instead of spamming with multiple questions -- that is distracting to the contributors involved. I'll check it out and see if I can help.

For readers, the issue reported by @xstrauss is #81.

@xstrauss This error ("IndexError: list index out of range" in DAIN.py) seems to be unrelated to the issue being discussed (Google Colab compatibility). Would you mind opening another issue on GitHub for that? If you can, also include the input video and the configuration you're trying to use. If we're able to reproduce it, we might get to the underlying problem.

/EDIT: Nevermind, I just saw you did. It's preferable to mention people instead of spamming with multiple questions -- that is distracting to the contributors involved. I'll check it out and see if I can help.

For readers, the issue reported by @xstrauss is #81.

Sorry, but I just need an answer.

According to the following settings, I successfully used DAIN on Colab today.

# Install PyTorch 1.4.0 with CUDA 10.0
!pip install torch==1.4.0+cu100 torchvision==0.5.0+cu100 -f https://download.pytorch.org/whl/torch_stable.html
# Then set the softlink to CUDA 10.0
!sudo ln -snf /usr/local/cuda-10.0 /usr/local/cuda
# After that we can perform a complete compilation.

Yes, it works with pytorch 1.4.0 and cuda 10.0.

I was trying to make a DAIN video when executing this command.

Detecting FPS of the input file.

`%shell yes | cp -f /content/gdrive/My\ Drive/{INPUT_FILEPATH} /content/DAIN/

import os
filename = os.path.basename(INPUT_FILEPATH)

import cv2
cap = cv2.VideoCapture(f'/content/DAIN/{filename}')

fps = cap.get(cv2.CAP_PROP_FPS)

if(fps/TARGET_FPS>0.5):
print("Define a higher fps, because there is not enough time for new frames. (Old FPS)/(New FPS) should be lower than 0.5. Interpolation will fail if you try.")`

I got an error saying this. I don't know how to code at all.

cp: cannot stat '/content/gdrive/My Drive//content/gdrive/My': No such file or directory
cp: cannot stat 'Drive/Testing': No such file or directory
cp: cannot stat 'Files/savetweetvid_EQIBfBDWoAEXDeK.gif': No such file or directory

CalledProcessError                        Traceback (most recent call last)
in ()
      1 # Detecting FPS of input file.
----> 2 get_ipython().magic('shell yes | cp -f /content/gdrive/My\ Drive/{INPUT_FILEPATH} /content/DAIN/')
      3 
      4 import os
      5 filename = os.path.basename(INPUT_FILEPATH)

3 frames
/usr/local/lib/python3.6/dist-packages/google/colab/_system_commands.py in check_returncode(self)
    136     if self.returncode:
    137       raise subprocess.CalledProcessError(
--> 138           returncode=self.returncode, cmd=self.args, output=self.output)
    139 
    140   def _repr_pretty_(self, p, cycle):  # pylint:disable=unused-argument

CalledProcessError: Command 'yes | cp -f /content/gdrive/My\ Drive//content/gdrive/My Drive/Testing Files/savetweetvid_EQIBfBDWoAEXDeK.gif /content/DAIN/' returned non-zero exit status 1.

I also tried another 'DAIN' (linked below), not the one from this GitHub repo, but the results seem questionable: the output video appears to just be sped up, and I've compared it to the DAIN-App result. I'm sorry if I'm a bit confusing; I've been trying to troubleshoot this.

The other DAIN link: https://colab.research.google.com/github/AhabbscienceStudioPak/DAIN/blob/master/DAIN_Colab.ipynb
Other DAIN video results: https://youtu.be/Jry3rBY6Guw


Try replacing
%shell yes | cp -f /content/gdrive/My\ Drive/{INPUT_FILEPATH} /content/DAIN/
with
%shell yes | cp -f "{INPUT_FILEPATH}" /content/DAIN/

Alternatively, I think INPUT_FILEPATH in this case should be the path inside your Drive, so try removing "/content/gdrive/My Drive" from INPUT_FILEPATH.
Just a reminder: use only one of the two solutions I gave; I'd prefer the second one.
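
To make those two options concrete with the path from the error above (the values are just examples; a filename containing spaces still needs the quoted form):

# Option 1: quote the variable in the cell and keep the full Drive path
INPUT_FILEPATH = "/content/gdrive/My Drive/Testing Files/savetweetvid_EQIBfBDWoAEXDeK.gif"
# cell:  %shell yes | cp -f "{INPUT_FILEPATH}" /content/DAIN/

# Option 2: keep the original cell and give the path relative to My Drive
INPUT_FILEPATH = "Testing Files/savetweetvid_EQIBfBDWoAEXDeK.gif"
# cell:  %shell yes | cp -f /content/gdrive/My\ Drive/{INPUT_FILEPATH} /content/DAIN/
# (with spaces in the name the unquoted form will still fail, so quote it as in option 1)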

GUYS I HAVE THE SOLUTION FOR THIS PROBLEM

CODE:

Detecting FPS of input file.

%shell yes | cp -f /content/gdrive/MyDrive/input.mp4 /content/DAIN/

import os
filename = os.path.basename(INPUT_FILEPATH)

import cv2
cap = cv2.VideoCapture(f'/content/DAIN/{filename}')

fps = cap.get(cv2.CAP_PROP_FPS)
print(f"Input file has {fps} fps")

if(fps/TARGET_FPS>0.5):
print("Define a higher fps, because there is not enough time for new frames. (Old FPS)/(New FPS) should be lower than 0.5. Interpolation will fail if you try.")

It is showing this:
Looking in links: https://download.pytorch.org/whl/torch_stable.html
ERROR: Could not find a version that satisfies the requirement torch==1.4.0+cu100 (from versions: 1.11.0, 1.11.0+cpu, 1.11.0+cu102, 1.11.0+cu113, 1.11.0+cu115, 1.11.0+rocm4.3.1, 1.11.0+rocm4.5.2, 1.12.0, 1.12.0+cpu, 1.12.0+cu102, 1.12.0+cu113, 1.12.0+cu116, 1.12.0+rocm5.0, 1.12.0+rocm5.1.1, 1.12.1, 1.12.1+cpu, 1.12.1+cu102, 1.12.1+cu113, 1.12.1+cu116, 1.12.1+rocm5.0, 1.12.1+rocm5.1.1, 1.13.0, 1.13.0+cpu, 1.13.0+cu116, 1.13.0+cu117, 1.13.0+cu117.with.pypi.cudnn, 1.13.0+rocm5.1.1, 1.13.0+rocm5.2, 1.13.1, 1.13.1+cpu, 1.13.1+cu116, 1.13.1+cu117, 1.13.1+cu117.with.pypi.cudnn, 1.13.1+rocm5.1.1, 1.13.1+rocm5.2, 2.0.0, 2.0.0+cpu, 2.0.0+cpu.cxx11.abi, 2.0.0+cu117, 2.0.0+cu117.with.pypi.cudnn, 2.0.0+cu118, 2.0.0+rocm5.3, 2.0.0+rocm5.4.2, 2.0.1, 2.0.1+cpu, 2.0.1+cpu.cxx11.abi, 2.0.1+cu117, 2.0.1+cu117.with.pypi.cudnn, 2.0.1+cu118, 2.0.1+rocm5.3, 2.0.1+rocm5.4.2)
ERROR: No matching distribution found for torch==1.4.0+cu100
How do I resolve this?