PINTO0309 / PINTO_model_zoo

A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8), EdgeTPU, CoreML.

Home Page: https://qiita.com/PINTO

Error while creating the post-process for ONNX YOLOv7

HoangTienDuc opened this issue · comments

Issue Type

Bug

OS

Ubuntu

OS architecture

x86_64

Programming Language

Python

Framework

TensorFlow, TensorFlowLite

Model name and Weights/Checkpoints URL

Model name: yolov7
https://github.com/PINTO0309/PINTO_model_zoo/tree/18e75913cb97eefa2d8fe9f87b86841407b4c826/307_YOLOv7/post_process_gen_tools

Description

Hi PINTO, your work is awesome.
I am trying to replicate your work to post-process my custom YOLOv7 model.
First, I followed the instructions from your model zoo's YOLOv7 post-process without any changes, but with your original post-process Docker images (1.1.5, 1.1.15) I cannot run "update_model_dims.update_inputs_outputs_dims".
I tried different Docker images and different tf2onnx/TensorFlow versions, but it still does not work.
Can you please give me some advice? Thanks

Relevant Log Output

2023-08-19 04:45:55,743 - INFO - Using tensorflow=2.13.0, onnx=1.14.0, tf2onnx=1.12.1/b6d590
2023-08-19 04:45:55,743 - INFO - Using opset <onnx, 11>
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
2023-08-19 04:45:55,764 - WARNING - Error loading model into tflite interpreter: _get_tensor_details() missing 1 required positional argument: 'subgraph_index'
2023-08-19 04:45:55,770 - INFO - Optimizing ONNX model
2023-08-19 04:45:55,784 - INFO - After optimization: Cast -1 (1->0), Const -2 (6->4), Identity -1 (1->0)
2023-08-19 04:45:55,785 - INFO - 
2023-08-19 04:45:55,785 - INFO - Successfully converted TensorFlow model saved_model_postprocess/nms_score_gather_nd.tflite to ONNX
2023-08-19 04:45:55,785 - INFO - Model inputs: ['serving_default_input_1:0', 'serving_default_input_2:0']
2023-08-19 04:45:55,785 - INFO - Model outputs: ['PartitionedCall:0']
2023-08-19 04:45:55,785 - INFO - ONNX model is saved at nms_score_gather_nd.onnx
INFO: Finish!
INFO: Finish!
INFO: Finish!
INFO: Finish!
OUTPUT_MODEL_PATH:  nms_score_gather_nd.onnx
model.graph.input:  [name: "serving_default_input_1:0"
type {
  tensor_type {
    elem_type: 1
    shape {
      dim {
        dim_value: 1
      }
      dim {
        dim_value: 80
      }
      dim {
        dim_value: 5040
      }
    }
  }
}
, name: "serving_default_input_2:0"
type {
  tensor_type {
    elem_type: 7
    shape {
      dim {
        dim_param: "unk__10"
      }
      dim {
        dim_value: 3
      }
    }
  }
}
]
input_dicts:  {'gn_scores': ['1', '80', '5040'], 'gn_selected_indices': ['N', '3']}
output_dicts:  {'final_scores': ['N', '1']}
input_dims:  {'gn_scores': ['1', '80', '5040'], 'gn_selected_indices': ['N', '3']}
Traceback (most recent call last):
  File "make_input_output_shape_update.py", line 75, in <module>
    updated_model = update_model_dims.update_inputs_outputs_dims(
  File "/home/user/.local/lib/python3.8/site-packages/onnx/tools/update_model_dims.py", line 88, in update_inputs_outputs_dims
    input_dim_arr = input_dims[input_name]
KeyError: 'serving_default_input_1:0'
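
The traceback shows that update_inputs_outputs_dims looks up each graph input by its actual name (serving_default_input_1:0, serving_default_input_2:0), while input_dicts is keyed by the post-rename names (gn_scores, gn_selected_indices), so the lookup fails. A minimal sketch of the dimension update with the dicts keyed by whatever names the graph really exposes, assuming the shapes shown above and that the inputs/outputs come in that order (this is not the repo's script, just an illustration):

```python
import onnx
from onnx.tools import update_model_dims

model = onnx.load("nms_score_gather_nd.onnx")

# Desired shapes, in the same order as the graph inputs/outputs (assumption).
input_shapes = [[1, 80, 5040], ["N", 3]]   # scores, selected_indices
output_shapes = [["N", 1]]                 # final scores

# Key the dicts by the names the graph actually uses
# (serving_default_input_1:0, ...), not by the post-rename names.
input_dims = {i.name: s for i, s in zip(model.graph.input, input_shapes)}
output_dims = {o.name: s for o, s in zip(model.graph.output, output_shapes)}

updated = update_model_dims.update_inputs_outputs_dims(model, input_dims, output_dims)
onnx.save(updated, "nms_score_gather_nd_updated.onnx")
```

If the renaming step (presumably the sor4onnx call in the convert script) runs first, the same dicts can instead be keyed by gn_scores / gn_selected_indices / final_scores as in the log above.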

URL or source code for simple inference testing code

No response

An error occurred during the conversion because the specification of sor4onnx, one of my homebrew tools, was changed. I committed a fix to convert_script.txt.

This script is still buggy as it has never been maintained since it was first created on a trial basis.

The final output is Y1X1Y2X2.

[image]
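
For downstream code that expects the usual X1Y1X2Y2 layout, the columns just need to be swapped. A tiny sketch (values are made up):

```python
import numpy as np

# boxes in [y1, x1, y2, x2] order, shape (N, 4)
boxes_yxyx = np.array([[66, 149, 156, 207]], dtype=np.int32)

# reorder columns to [x1, y1, x2, y2]
boxes_xyxy = boxes_yxyx[:, [1, 0, 3, 2]]
print(boxes_xyxy)  # [[149  66 207 156]]
```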

In addition, I fixed the binding bug for final_box_nums.

[image]

x1y1x2y2 -> y1x1y2x2
[image]

In the Concat just before NMS, we need to generate both the Y1X1Y2X2 tensor for NMS and the X1Y1X2Y2 tensor for the final output, but this is a year-old script and I have not modified it because it is too much trouble.
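
For context, ONNX NonMaxSuppression with center_point_box=0 expects boxes as [y1, x1, y2, x2], which is why the NMS branch needs that order, while the final output would ideally stay [x1, y1, x2, y2]. A hedged sketch of what such a dual Concat could look like with onnx.helper; the tensor and node names here are illustrative, not the ones in the generated graph:

```python
# Illustrative only: build both box orderings from the same coordinate tensors,
# mirroring what the Concat before NMS would have to do. All names are hypothetical.
from onnx import helper

# y1, x1, y2, x2 are assumed to be existing (batch, boxes, 1) tensors in the graph.
concat_for_nms = helper.make_node(
    "Concat", inputs=["y1", "x1", "y2", "x2"],
    outputs=["boxes_y1x1y2x2"], name="concat_nms_boxes", axis=2,
)
concat_for_output = helper.make_node(
    "Concat", inputs=["x1", "y1", "x2", "y2"],
    outputs=["boxes_x1y1x2y2"], name="concat_final_boxes", axis=2,
)
```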

One more point: if you don't add a Sqrt here, the scores come out oddly small, less than half of what they should be.

[image]
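
A rough numeric illustration of why a missing Sqrt makes scores look "less than half": if the value reaching the output has effectively been squared relative to the intended confidence (my assumption about the cause; the thread only shows the fix), anything below 0.5 drops to less than half of its true value, and a Sqrt node restores it:

```python
import numpy as np

scores = np.array([0.9, 0.6, 0.4])   # intended confidences
squared = scores ** 2                # what you would see without the fix (assumed)
print(squared)                       # [0.81 0.36 0.16]  -> 0.4 has dropped below half
print(np.sqrt(squared))              # [0.9  0.6  0.4 ]  -> Sqrt restores the values
```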

All corrected.
[image]

Duplicate results carried over from one image to other images

Hi @PINTO0309, I converted the model from ONNX to TensorRT, then used TensorRT code to run predictions.

  • For the model yolov7_post_256x320.onnx that you provide, it works perfectly.
  • For my custom model (input size 256x320), after using "post_process_gen_tools" I got the same model structure as yours. However, after testing with a series of images, I found that results are duplicated from one image to the next.

For example:


image_path /data/data_video/24/pgie/images/0_10-8-2023-17-46_11_109380.png, batch_pred: [[array([149,  66, 207, 156], dtype=int32), array([175, 290, 213, 320], dtype=int32), array([110, 291, 141, 319], dtype=int32)], [0.9104384, 0.91341376, 0.894918], [0, 0, 0]]

**************************************************

image_path /data/data_video/24/pgie/images/0_10-8-2023-17-46_17_72020.png, batch_pred: [[array([154, 136, 201, 189], dtype=int32), array([175, 290, 213, 320], dtype=int32), array([110, 291, 141, 319], dtype=int32)], [0.98416233, 0.91341376, 0.894918], [0, 0, 0]]

In both of the images above, two objects were detected with confidence scores of 0.91341376 and 0.894918. However, in reality neither image contains such objects; these detections are carried over from a few previous images.

My sample code is saved on GitLab; please take a look.
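
One common cause of detections "leaking" from one image to the next, purely as an assumption since the linked code is not shown here, is reading a fixed-size output buffer without slicing it by the reported detection count (final_box_nums in this post-process), so entries left over from a previous inference are returned again. A minimal sketch with illustrative names:

```python
import numpy as np

def parse_detections(final_box_nums, boxes, scores, classes):
    """Keep only the detections reported for the current image so that
    stale entries from a previous inference are never read.
    Argument names mirror the post-process outputs; adapt to your bindings."""
    n = int(final_box_nums)
    return boxes[:n], scores[:n], classes[:n]

# Example with a padded 100-slot output buffer and 2 valid detections:
boxes = np.zeros((100, 4), dtype=np.int32)
scores = np.zeros((100,), dtype=np.float32)
classes = np.zeros((100,), dtype=np.int32)
kept_boxes, kept_scores, kept_classes = parse_detections(2, boxes, scores, classes)
print(kept_boxes.shape, kept_scores.shape)  # (2, 4) (2,)
```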

I don't understand what you are talking about.

I will not look into it unless you provide me with test images and models.

It is no surprise that merging single-batch post-processing into a multi-batch main model will corrupt the output.
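
A quick way to check for that mismatch before merging is to compare the batch dimensions of both graphs' inputs; a small sketch using the onnx API (the file name is the one mentioned above, the check itself is generic):

```python
import onnx

def input_batch_dims(path):
    """Return {input_name: first_dim} for a model's graph inputs."""
    model = onnx.load(path)
    dims = {}
    for inp in model.graph.input:
        d = inp.type.tensor_type.shape.dim[0]
        dims[inp.name] = d.dim_param or d.dim_value  # symbolic name, else fixed int
    return dims

print(input_batch_dims("yolov7_post_256x320.onnx"))
# If the main model's batch dimension is dynamic or >1 while the post-process
# graph expects 1, merging them will not produce correct batched outputs.
```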

[image]