openvinotoolkit / mmdetection

OpenVINO Training Extensions Object Detection

Home Page: https://github.com/opencv/openvino_training_extensions

Error while export pretrained model to OpenVINO

snowhou opened this issue

Checklist

  1. I have searched related issues but cannot get the expected help.
  2. The bug has not been fixed in the latest version.

Describe the bug
When I try to export the pretrained model (faster_rcnn_r50_fpn_1x_20181010-3d1b3351.pth from MODEL_ZOO.md) to OpenVINO, I get the following error:

[ ERROR ]  Cannot infer shapes or values for node "560".
[ ERROR ]  There is no registered "infer" function for node "560" with op = "Resize". Please implement this function in the extensions.

Reproduction

  1. What command or script did you run?
python tools/export.py /project/ev_sdk/src/otedetection/configs/faster_rcnn_r50_fpn_1x.py /home/faster_rcnn_r50_fpn_1x_20181010-3d1b3351.pth /project/ev_sdk/model openvino
  2. Did you make any modifications on the code or config? Did you understand what you have modified?
    No, I made no modifications; I used the stock faster_rcnn_r50_fpn_1x.py from configs.
  3. What dataset did you use?
    COCO

Environment

  1. Please run python tools/collect_env.py to collect necessary environment information and paste it here.
sys.platform: linux
Python: 3.6.10 (default, Dec 19 2019, 23:04:32) [GCC 5.4.0 20160609]
CUDA available: True
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 10.1, V10.1.243
GPU 0: GeForce GTX 1080 Ti
GCC: gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
PyTorch: 1.4.0
PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - Intel(R) Math Kernel Library Version 2019.0.4 Product Build 20190411 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - NNPACK is enabled
  - CUDA Runtime 10.1
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
  - CuDNN 7.6.3
  - Magma 2.5.1
  - Build settings: BLAS=MKL, BUILD_NAMEDTENSOR=OFF, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Wno-stringop-overflow, DISABLE_NUMA=1, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF, 

TorchVision: 0.5.0
OpenCV: 4.2.0
MMCV: 0.2.16
MMDetection: 1.0rc1+d1d8011
MMDetection Compiler: GCC 5.4
MMDetection CUDA Compiler: 10.1

Error traceback
If applicable, paste the error traceback here.

/usr/local/ev_sdk/src/otedetection/mmdet/apis/inference.py:40: UserWarning: Class names are not saved in the checkpoint's meta data, use COCO classes by default.
  warnings.warn('Class names are not saved in the checkpoint\'s '
/usr/local/ev_sdk/src/otedetection/mmdet/models/detectors/base.py:147: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  imgs_per_gpu = int(imgs[0].size(0))
/usr/local/ev_sdk/src/otedetection/mmdet/models/anchor_heads/rpn_head.py:67: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert rpn_cls_score.size()[-2:] == rpn_bbox_pred.size()[-2:]
/usr/local/ev_sdk/src/otedetection/mmdet/core/utils/misc.py:95: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  k = torch.tensor([k], dtype=torch.long)
/usr/local/ev_sdk/src/otedetection/mmdet/core/utils/misc.py:105: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  values, keep = torch.topk(x, n, dim=dim, **kwargs)
/usr/local/ev_sdk/src/otedetection/mmdet/core/bbox/transforms.py:51: TracerWarning: torch.as_tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  min_val = torch.as_tensor(min, dtype=dtype, device=device)
/usr/local/ev_sdk/src/otedetection/mmdet/core/bbox/transforms.py:56: TracerWarning: torch.as_tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  max_val = torch.as_tensor(max, dtype=dtype, device=device)
/usr/local/ev_sdk/src/otedetection/mmdet/core/post_processing/bbox_nms.py:64: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  for i in range(num_classes):
/usr/local/ev_sdk/src/otedetection/mmdet/core/post_processing/bbox_nms.py:66: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if not cls_inds.any():
/usr/local/ev_sdk/src/otedetection/mmdet/core/post_processing/bbox_nms.py:69: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if multi_bboxes.shape[1] == 4:
/usr/local/ev_sdk/src/otedetection/mmdet/ops/nms/nms_wrapper.py:50: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if dets_th.shape[0] == 0:
/usr/local/ev_sdk/src/otedetection/mmdet/core/post_processing/bbox_nms.py:85: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if bboxes.shape[0] > max_num:
/usr/local/ev_sdk/src/otedetection/mmdet/core/bbox/transforms.py:193: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if bboxes.size(0) > 0:
/usr/local/ev_sdk/src/otedetection/mmdet/core/bbox/transforms.py:51: TracerWarning: torch.as_tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  min_val = torch.as_tensor(min, dtype=dtype, device=device)
/usr/local/ev_sdk/src/otedetection/mmdet/core/bbox/transforms.py:56: TracerWarning: torch.as_tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  max_val = torch.as_tensor(max, dtype=dtype, device=device)
/usr/local/ev_sdk/src/otedetection/mmdet/core/post_processing/bbox_nms.py:64: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  for i in range(num_classes):
/usr/local/ev_sdk/src/otedetection/mmdet/core/post_processing/bbox_nms.py:66: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if not cls_inds.any():
/usr/local/ev_sdk/src/otedetection/mmdet/core/utils/misc.py:95: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  k = torch.tensor([k], dtype=torch.long)
/usr/local/ev_sdk/src/otedetection/mmdet/core/utils/misc.py:105: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  values, keep = torch.topk(x, n, dim=dim, **kwargs)
/usr/local/lib/python3.6/dist-packages/torch/onnx/symbolic_helper.py:246: UserWarning: You are trying to export the model with onnx:Resize for ONNX opset version 10. This operator might cause results to not match the expected results by PyTorch.
ONNX's Upsample/Resize operator did not match Pytorch's Interpolation until opset 11. Attributes to determine how to transform the input were added in onnx:Resize in opset 11 to support Pytorch's behavior (like coordinate_transformation_mode and nearest_mode).
We recommend using opset 11 and above for models using this operator. 
  "" + str(_export_onnx_opset_version) + ". "
ONNX model has been saved to "/project/ev_sdk/model/faster_rcnn_r50_fpn_1x.onnx"
mo.py --input_model="/project/ev_sdk/model/faster_rcnn_r50_fpn_1x.onnx" --mean_values="[123.675, 116.28, 103.53]" --scale_values="[58.395, 57.12, 57.375]" --output_dir="/project/ev_sdk/model" --output="boxes,labels" --input_shape="[1, 3, 800, 800]" --reverse_input_channels
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      /project/ev_sdk/model/faster_rcnn_r50_fpn_1x.onnx
        - Path for generated IR:        /project/ev_sdk/model
        - IR output name:       faster_rcnn_r50_fpn_1x
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        boxes,labels
        - Input shapes:         [1, 3, 800, 800]
        - Mean values:  [123.675, 116.28, 103.53]
        - Scale values:         [58.395, 57.12, 57.375]
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       True
ONNX specific parameters:
Model Optimizer version:        2020.1.0-61-gd349c3ba4a
[ ERROR ]  Cannot infer shapes or values for node "560".
[ ERROR ]  There is no registered "infer" function for node "560" with op = "Resize". Please implement this function in the extensions. 
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #37. 
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <UNKNOWN>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "560" node. 
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38. 
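
The opset warning earlier in the export log points at the Resize node the Model Optimizer stops on: with ONNX opset 10, PyTorch's nearest-neighbour upsampling is exported as a bare onnx::Resize without the attributes opset 11 introduced. Below is a minimal, self-contained sketch (not the repository's export script; the toy module, tensor shape and output file names are illustrative assumptions) showing how the opset version changes the exported operator.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Upsample2x(nn.Module):
    # Stands in for the FPN-style upsampling path that produces the Resize node.
    def forward(self, x):
        return F.interpolate(x, scale_factor=2, mode='nearest')

dummy = torch.randn(1, 256, 50, 50)
for opset in (10, 11):
    # Opset 10 emits the limited onnx::Resize flagged in the warning above;
    # opset 11 adds coordinate_transformation_mode and nearest_mode attributes.
    torch.onnx.export(Upsample2x(), dummy, 'upsample_opset%d.onnx' % opset,
                      opset_version=opset)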

Thank you for using the issue template and providing all of that information!

OpenVINO 2020.2 is required, while according to the provided log you are using 2020.1.
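
If it helps to confirm which release is actually picked up, the installed Inference Engine build can be queried from Python (a hedged sketch using the 2020.x Python API; the "CPU" device name is an assumption about your setup):

from openvino.inference_engine import IECore

ie = IECore()
# get_versions() reports the Inference Engine build for the given device;
# after upgrading, the build number should correspond to a 2020.2 release.
print(ie.get_versions("CPU"))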

Thank you! After switching to OpenVINO 2020.2, the problem is solved. Nice work!