Jittor / jittor

Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators.

Home Page: https://cg.cs.tsinghua.edu.cn/jittor/

Crash with an error during compilation in `jt.nn.max_pool3d/max_pool2d`

x0w3n opened this issue

Describe the bug

A crash occurs when unexpected values, such as `kernel_size=-1, stride=-1`, are passed to Jittor's `jt.nn.max_pool3d`/`jt.nn.max_pool2d`, resulting in an error during compilation instead of a clear argument error.

Full Log

max_pool2d:

[i 0508 13:41:22.394176 64 compiler.py:956] Jittor(1.3.9.6) src: /jittor/python/jittor
[i 0508 13:41:22.396759 64 compiler.py:957] g++ at /usr/bin/g++(8.3.0)
[i 0508 13:41:22.396815 64 compiler.py:958] cache_path: /root/.cache/jittor/jt1.3.9/g++8.3.0/py3.7.4/Linux-5.15.153x56/13thGenIntelRCx56/8c5a/master
[i 0508 13:41:22.400760 64 __init__.py:412] Found addr2line(2.31.1) at /usr/bin/addr2line.
[i 0508 13:41:22.626378 64 __init__.py:227] Total mem: 15.43GB, using 5 procs for compiling.
[i 0508 13:41:22.699519 64 jit_compiler.cc:28] Load cc_path: /usr/bin/g++
Traceback (most recent call last):
  File "test.py", line 8, in <module>
    a = jt.nn.max_pool2d(x=jt.random((1,3,3,5)),kernel_size=-1,stride=-1,padding=-1) # crash
  File "/jittor/python/jittor/pool.py", line 551, in max_pool2d
    return MaxPool2d(kernel_size, stride, padding, dilation, return_indices, ceil_mode)(x)
  File "/jittor/python/jittor/__init__.py", line 1184, in __call__
    return self.execute(*args, **kw)
  File "/jittor/python/jittor/pool.py", line 541, in execute
    return self._layer(x)
  File "/jittor/python/jittor/__init__.py", line 1184, in __call__
    return self.execute(*args, **kw)
  File "/jittor/python/jittor/pool.py", line 176, in execute
    '''])
RuntimeError: Wrong inputs arguments, Please refer to examples(help(jt.ops.code)).

Types of your inputs are:
 self   = module,
 args   = (list, NanoString, list, ),
 kwargs = {cuda_header=str, cuda_src=str, cuda_grad_src=list, cpu_header=str, cpu_src=str, cpu_grad_src=list, },

The function declarations are:
 VarHolder* code(NanoVector shape,  NanoString dtype, vector<VarHolder*>&& inputs={},  string&& cpu_src="",  vector<string>&& cpu_grad_src={},  string&& cpu_header="",  string&& cuda_src="",  vector<string>&& cuda_grad_src={},  string&& cuda_header="",  DataMap&& data={})
 vector_to_tuple<VarHolder*> code_(vector<NanoVector>&& shapes,  vector<NanoString>&& dtypes, vector<VarHolder*>&& inputs={},  string&& cpu_src="",  vector<string>&& cpu_grad_src={},  string&& cpu_header="",  string&& cuda_src="",  vector<string>&& cuda_grad_src={},  string&& cuda_header="",  DataMap&& data={})
 vector_to_tuple<VarHolder*> code__(vector<VarHolder*>&& inputs, vector<VarHolder*>&& outputs,  string&& cpu_src="",  vector<string>&& cpu_grad_src={},  string&& cpu_header="",  string&& cuda_src="",  vector<string>&& cuda_grad_src={},  string&& cuda_header="",  DataMap&& data={})

Failed reason:[f 0508 13:41:22.832308 64 code_op.cc:25] Check failed: (i == 0) ^ (v[i] >= 0)  Something wrong... Could you please report this issue?
 Vary shape should only occur in the first dimension: [1,3,-1,-3,]

max_pool3d:

[i 0508 13:42:09.968239 08 compiler.py:956] Jittor(1.3.9.6) src: /jittor/python/jittor
[i 0508 13:42:09.971688 08 compiler.py:957] g++ at /usr/bin/g++(8.3.0)
[i 0508 13:42:09.971747 08 compiler.py:958] cache_path: /root/.cache/jittor/jt1.3.9/g++8.3.0/py3.7.4/Linux-5.15.153x56/13thGenIntelRCx56/8c5a/master
[i 0508 13:42:09.976544 08 __init__.py:412] Found addr2line(2.31.1) at /usr/bin/addr2line.
[i 0508 13:42:10.198135 08 __init__.py:227] Total mem: 15.43GB, using 5 procs for compiling.
[i 0508 13:42:10.271802 08 jit_compiler.cc:28] Load cc_path: /usr/bin/g++
Traceback (most recent call last):
  File "test.py", line 7, in <module>
    a = jt.nn.max_pool3d(x=jt.random((1,3,3,5,5)),kernel_size=-1,stride=-1,padding=-1) # crash
  File "/jittor/python/jittor/pool.py", line 555, in max_pool3d
    return MaxPool3d(kernel_size, stride, padding, dilation, return_indices, ceil_mode)(x)
  File "/jittor/python/jittor/__init__.py", line 1184, in __call__
    return self.execute(*args, **kw)
  File "/jittor/python/jittor/pool.py", line 548, in execute
    return self._layer(x)
  File "/jittor/python/jittor/__init__.py", line 1184, in __call__
    return self.execute(*args, **kw)
  File "/jittor/python/jittor/pool.py", line 373, in execute
    '''])
RuntimeError: Wrong inputs arguments, Please refer to examples(help(jt.ops.code)).

Types of your inputs are:
 self   = module,
 args   = (list, NanoString, list, ),
 kwargs = {cuda_header=str, cuda_src=str, cuda_grad_src=list, cpu_header=str, cpu_src=str, cpu_grad_src=list, },

The function declarations are:
 VarHolder* code(NanoVector shape,  NanoString dtype, vector<VarHolder*>&& inputs={},  string&& cpu_src="",  vector<string>&& cpu_grad_src={},  string&& cpu_header="",  string&& cuda_src="",  vector<string>&& cuda_grad_src={},  string&& cuda_header="",  DataMap&& data={})
 vector_to_tuple<VarHolder*> code_(vector<NanoVector>&& shapes,  vector<NanoString>&& dtypes, vector<VarHolder*>&& inputs={},  string&& cpu_src="",  vector<string>&& cpu_grad_src={},  string&& cpu_header="",  string&& cuda_src="",  vector<string>&& cuda_grad_src={},  string&& cuda_header="",  DataMap&& data={})
 vector_to_tuple<VarHolder*> code__(vector<VarHolder*>&& inputs, vector<VarHolder*>&& outputs,  string&& cpu_src="",  vector<string>&& cpu_grad_src={},  string&& cpu_header="",  string&& cuda_src="",  vector<string>&& cuda_grad_src={},  string&& cuda_header="",  DataMap&& data={})

Failed reason:[f 0508 13:42:10.419155 08 code_op.cc:25] Check failed: (i == 0) ^ (v[i] >= 0)  Something wrong... Could you please report this issue?
 Vary shape should only occur in the first dimension: [1,3,-1,-3,-3,]
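
The negative dimensions in the "Vary shape" messages line up with the standard pooling output-size formula evaluated with the negative arguments. A minimal sketch of the arithmetic, assuming the usual floor-based formula and ignoring dilation (`pool_out` is an illustrative helper, not Jittor code):

import math

def pool_out(size, kernel_size, stride, padding):
    # standard pooling output size: floor((size + 2*padding - kernel_size) / stride) + 1
    return math.floor((size + 2 * padding - kernel_size) / stride) + 1

# max_pool2d input (1, 3, 3, 5) with kernel_size = stride = padding = -1:
print(pool_out(3, -1, -1, -1))  # -1
print(pool_out(5, -1, -1, -1))  # -3  -> requested output shape [1, 3, -1, -3]

# max_pool3d input (1, 3, 3, 5, 5) with the same arguments:
print(pool_out(3, -1, -1, -1))  # -1
print(pool_out(5, -1, -1, -1))  # -3
print(pool_out(5, -1, -1, -1))  # -3  -> requested output shape [1, 3, -1, -3, -3]

Because Jittor treats a negative size as a "vary" dimension, which is only allowed in the first position, the check in code_op.cc fails and the call aborts instead of reporting an argument error.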

Minimal Reproduce

import jittor as jt
max_pool3d = jt.nn.max_pool3d(x=jt.random((1,3,3,5,5)),kernel_size=-1,stride=-1,padding=-1) # crash
# max_pool2d = jt.nn.max_pool2d(x=jt.random((1,3,3,5)),kernel_size=-1,stride=-1,padding=-1) # crash
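
For comparison, the same call with valid positive arguments runs without error (a hypothetical sanity check; the printed shape is what the standard output-size formula predicts):

ok = jt.nn.max_pool2d(x=jt.random((1,3,3,5)), kernel_size=2, stride=2, padding=0)
print(ok.shape)  # expected: [1, 3, 1, 2]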

Expected behavior

The likely root cause is that the shared underlying pooling implementation performs no boundary checks on `kernel_size`, `stride`, `padding`, etc. Invalid (non-positive) values should be rejected early with a clear error message rather than failing inside the JIT compiler; a possible check is sketched below.
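
A minimal sketch of the kind of boundary check that could be added to the shared pooling code (the function name and scalar-only handling are illustrative assumptions, not Jittor's actual implementation):

def _check_pool_args(kernel_size, stride, padding, dilation=1):
    # Illustrative validation for scalar arguments only; real code would also
    # need to handle tuple-valued kernel_size/stride/padding.
    if kernel_size <= 0:
        raise ValueError(f"kernel_size must be positive, got {kernel_size}")
    if stride is not None and stride <= 0:
        raise ValueError(f"stride must be positive, got {stride}")
    if padding < 0:
        raise ValueError(f"padding must be non-negative, got {padding}")
    if dilation <= 0:
        raise ValueError(f"dilation must be positive, got {dilation}")

With a check like this, the calls in the reproduce script would fail immediately with a clear ValueError instead of aborting inside the JIT compiler.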