geaxgx / depthai_hand_tracker

Running Google Mediapipe Hand Tracking models on Luxonis DepthAI hardware (OAK-D-lite, OAK-D, OAK-1,...)


custom_models convert error - `Cannot infer shapes or values for node "If_38"`

759401524 opened this issue · comments

I wanted to implement a custom model following your instructions, but the following error occurred when converting to OpenVINO.

How did you solve it? Could you help me?

[ ERROR ]  Cannot infer shapes or values for node "If_38".
[ ERROR ]  There is no registered "infer" function for node "If_38" with op = "If". Please implement this function in the extensions. 
 For more information please refer to Model Optimizer FAQ, question #37. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=37#question-37)
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <UNKNOWN>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "If_38" node. 
 For more information please refer to Model Optimizer FAQ, question #38. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=38#question-38)


I didn't get this error. Maybe we don't have the same versions of the tools used by generate_post_proc_onnx.py.
I just realized that the requirements.txt in the custom_models directory is missing from my github repo. Sorry for that.
Below is the requirements.txt content:

torch >= 1.9
onnx >= 1.10
onnx-simplifier
--extra-index-url https://pypi.ngc.nvidia.com
onnx_graphsurgeon

Can you create this file with the content above, then execute: python3 -m pip install -r requirements.txt

If you still get an error, can you tell me the versions of torch, onnx, onnx-simplifier and onnx_graphsurgeon (with pip list)?
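
If it is easier, a small snippet like this should print them all at once (just a sketch using the standard-library importlib.metadata, available from Python 3.8; the fallback handles "-" vs "_" in the distribution names):

import importlib.metadata as md

# Distribution names as shown by "pip list"
for name in ("torch", "onnx", "onnx-simplifier", "onnx-graphsurgeon"):
    for candidate in (name, name.replace("-", "_")):
        try:
            print(name, md.version(candidate))
            break
        except md.PackageNotFoundError:
            pass
    else:
        print(name, "not found")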

I rebuilt a virtual environment with conda and installed the requirements according to your suggestion, but the same problem still occurs.
Here are the versions:

Package           Version
----------------- --------
flatbuffers       2.0
numpy             1.21.4
onnx              1.10.2
onnx-graphsurgeon 0.3.14
onnx-simplifier   0.3.6
onnxoptimizer     0.2.6
onnxruntime       1.9.0
opencv-python     4.5.1.48
Pillow            8.4.0
pip               21.3.1
protobuf          3.19.1
setuptools        59.1.1
six               1.16.0
torch             1.10.0
torchvision       0.11.1
typing_extensions 4.0.0
wheel             0.37.0

Thank you.
I was able to reproduce the issue by using the same versions as you. A short-term workaround is to downgrade torch.
I am using:
torch 1.9.0
torchvision 0.10.0
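
If you want to pin those versions, something like this should work (assuming the default PyPI wheels are fine for your setup; adjust if you need a specific CUDA build):

python3 -m pip install "torch==1.9.0" "torchvision==0.10.0"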

Check if issue already exists
==> My issue already exists; I found it by googling it on the luxonis/depthai GitHub.

Describe the bug
==> We are running the [depthai_hand_tracker] models on the Visual Studio platform.
When a single hand gets very close, within 40 cm, the model shuts down immediately with no detection at all, and we are forced to close it.
We then see this fatal error: Fatal error, Please report to developers. Log: 'ResourceLocker' '358'

Minimal Reproducible Example
Googling "Fatal error, Please report to developers. Log: 'ResourceLocker' '358'" leads to this related issue: https://github.com/luxonis/depthai/issues/602

Expected behavior
==> When the hand slowly gets very close to the OAK-D PRO against a brighter background, it shuts down immediately with no detection; against a darker or more complex background it does not.

Screenshots
(screenshot attached)

Pipeline Graph
none

Attach system log
Output of [0803_hand_tracker_demo.py] with [hand_landmark_080_sh4.blob], [hand_landmark_full_sh4.blob] and [hand_landmark_lite_sh4.blob]

Additional context
Source code of [0803_hand_tracker_demo.py]:

import HandTrackerRenderer
import HandTrackerEdge

import board
import digitalio

import cv2
import tkinter as tk
import tkinter.ttk as ttk
from PIL import Image, ImageTk, ImageFont, ImageDraw
import time

# Red indicator output (Adafruit Blinka / digitalio)
red_o = digitalio.DigitalInOut(board.C0)
red_o.direction = digitalio.Direction.OUTPUT
red_o.value = False

tracker_args = {"use_same_image": True}

tracker = HandTrackerEdge.HandTracker(
    input_src=None,
    use_lm=True,
    use_world_landmarks=False,
    use_gesture=True,
    xyz=False,
    solo=True,
    crop=False,
    resolution="full",
    stats=True,
    trace=0,
    use_handedness_average=True,
    single_hand_tolerance_thresh=5,
    lm_nb_threads=2,
    **tracker_args)

renderer = HandTrackerRenderer.HandTrackerRenderer(
    tracker=tracker,
    output=None)

# GUI
wd = tk.Tk()  # create the root window
wd.title("Riko Machine Vision - HV-R100 - 手部安全防護")  # window title ("hand safety protection")
wd.iconbitmap("Riko.ico")  # window icon

w = 1920
h = 1080
x = -10
y = 0
wd.geometry("%dx%d+%d+%d" % (w, h, x, y))  # window size and position (+ left & top; - right & bottom)

pws = ttk.PanedWindow(wd, orient=tk.HORIZONTAL)  # full-window parent container
pws.pack(fill=tk.BOTH, expand=True)  # expand: fill the whole window

# ======================================================================================
# Left side: video view

# Left parent container
pw = ttk.PanedWindow(pws, orient=tk.VERTICAL)
pws.add(pw, weight=2)

label_1 = tk.Label(pw, bg="pale green", width=1920, height=1080)  # video frame container
pw.add(label_1, weight=2)

# ======================================================================================
# Right side: info column

# Right parent container
pw2 = ttk.PanedWindow(pws, orient=tk.VERTICAL)
pws.add(pw2, weight=2)

# --------------------------------------------------------------------------------------
# Data panel container
bg_lf1 = "sky blue"
labelframe_1 = tk.LabelFrame(pw2, bg=bg_lf1, width=160, height=700)
pw2.add(labelframe_1, weight=2)

# Group title
min_d_label_1 = tk.Label(labelframe_1, bg=bg_lf1, text="手部安全防護 ", width=10,
                         font=("Times", 16, "bold"))
min_d_label_1.grid(padx=3, pady=2, row=0, column=0, sticky=tk.E + tk.W)

min_d_label_4 = tk.Label(labelframe_1, bg=bg_lf1, width=2, font=("Times", 24, "bold"))
min_d_label_4.grid(padx=3, pady=2, row=1, column=2, sticky=tk.E + tk.W)

# Nearest distance
min_d_label_1 = tk.Label(labelframe_1, bg=bg_lf1, text="最近距離 : ", width=8,
                         font=("Times", 24, "bold"))
min_d_label_1.grid(padx=3, pady=2, row=2, column=0, sticky=tk.E + tk.W)

min_d_label_2 = tk.Label(labelframe_1, bg=bg_lf1, width=8, font=("Times", 24, "bold"))
min_d_label_2.grid(padx=3, pady=2, row=3, column=0, sticky=tk.E + tk.W)

min_d_label_3 = tk.Label(labelframe_1, bg=bg_lf1, width=8, font=("Times", 24, "bold"))
min_d_label_3.grid(padx=3, pady=2, row=3, column=1, sticky=tk.E + tk.W)

min_d_label_4 = tk.Label(labelframe_1, bg=bg_lf1, width=8, font=("Times", 24, "bold"))
min_d_label_4.grid(padx=3, pady=2, row=4, column=1, sticky=tk.E + tk.W)

# FPS
fps_label_1 = tk.Label(labelframe_1, bg=bg_lf1, text="FPS : ", width=8,
                       font=("Times", 24, "bold"))
fps_label_1.grid(padx=3, pady=2, row=5, column=0, sticky=tk.W)

fps_label_2 = tk.Label(labelframe_1, bg=bg_lf1, fg="orange red", width=8,
                       font=("Times", 24, "bold"))
fps_label_2.grid(padx=3, pady=2, row=5, column=1, sticky=tk.E + tk.W)

# Main loop: grab frames from the tracker, draw hands, display
while True:
    i_number, frame, hands, bag = tracker.next_frame()
    print("i_number = ", i_number)

    if frame is None:
        break
    frame = renderer.draw(frame, hands, bag)

    # frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # imgtk = ImageTk.PhotoImage(image=Image.fromarray(frame))
    # label_1.imgtk = imgtk
    # label_1.config(image=imgtk)
    # label_1.update()
    # key = cv2.waitKey(1)

    key, frame = renderer.waitKey(delay=1)
    cv2.imshow("Riko Machine Vision - HV-R100", frame)

    if key == 27 or key == ord('q'):
        break

renderer.exit()
tracker.exit()

Hi,
Why are you using this old issue instead of creating a new one?
What depthai version are you using? If not the latest (2.19.1), can you upgrade? (A quick way to check is shown below.)
Are you using the latest version of my repo? If not, can you upgrade?
Does the problem occur every time you bring the hand very close to the camera?
Do you reproduce the problem when using the original demo.py of my repo? ./demo.py -e -s -g
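
From Python, something like this prints the installed version (assuming the depthai package from pip):

import depthai
print(depthai.__version__)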

Hi geaxgx:

We had found that old issue on depthai_hand_tracker at the time, which is why we posted here.

We confirm that the depthai version is the latest (2.19.1.0), as in the attached picture.

We seem to be using an old version of your repo. Could you please tell us where to download the latest version?

The problem occurs every time my hand gets close to the camera.

We will try the original demo.py from your repo and see if it reproduces the problem.

all the best~~

The issue you are writing in is titled "custom_models convert error - Cannot infer shapes or values for node "If_38"".
If you post your issue here, it is because you think that your own issue is related to the "custom_models convert error". But it does not seem so, so you should have created a new issue dedicated to your problem in this repo.

The latest commit of this repo is from May 15 2022. If you have cloned or downloaded this repo since that date, you should have the latest version. Otherwise, git pull or download it again, try ./demo.py -e -s -g and check if you can reproduce the problem.

Dear geaxgx:

Sorry for creating this issue under the wrong title.
Thanks for pointing us to the latest repo; we will git pull or download it again.
By the way, we found that the depthai version must be lower than 2.19.1.0:
we have tested 2.13 and 2.15 with the hand tracker, and there is no problem at all, it runs very smoothly.

all the best~~

@Undertaker7533967 I was facing the same problem, downgraded depthai to 2.15 and it worked. Thanks!
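For anyone else hitting this, the downgrade is just a pip pin, e.g. something like: python3 -m pip install "depthai==2.15.*" (pick whichever 2.15.x release you want to test).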