geaxgx / depthai_hand_tracker

Running Google Mediapipe Hand Tracking models on Luxonis DepthAI hardware (OAK-D-lite, OAK-D, OAK-1,...)

Rotate

rexn8r opened this issue · comments

commented

Hi @geaxgx

Great work!!!

I'm wondering if you could help.

I am trying to run demo.py with a -90 degree rotation of the camera.

I added the following lines of code in HandTracker.py:

rgbRr = dai.RotatedRect()
rgbRr.center.x, rgbRr.center.y = cam.getPreviewWidth() // 2, cam.getPreviewHeight() // 2
rgbRr.size.width, rgbRr.size.height = cam.getPreviewHeight(), cam.getPreviewWidth()
rgbRr.angle = -90
manip.initialConfig.setCropRotatedRect(rgbRr, False)

and also the following lines:

if not self.use_previous_landmarks:
    # Send image manip config to the device
    cfg = dai.ImageManipConfig()
    # We prepare the input to the Palm detector
    #cfg.setResizeThumbnail(self.pd_input_length, self.pd_input_length)
    rgbRr = dai.RotatedRect()
    rgbRr.center.x, rgbRr.center.y = self.pd_input_length // 2, self.pd_input_length // 2
    rgbRr.size.width, rgbRr.size.height = self.pd_input_length, self.pd_input_length
    rgbRr.angle = -90
    cfg.setCropRotatedRect(rgbRr, False)

    self.q_manip_cfg.send(cfg)

But the view is not rotated, and the hand tracking stops.

I did try changing the video size / preview size to 1072 x 1072 (divisible by 16), but no luck.

Any pointers would be appreciated.
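
For what it's worth, here is a minimal standalone sketch (independent of the tracker code, assuming a recent depthai 2.x API) that exercises only the rotation path: color preview -> ImageManip configured with a -90° RotatedRect -> XLinkOut -> host display. The preview size and stream name are arbitrary test choices, not values taken from HandTracker.py; it can help confirm that the RotatedRect configuration itself works before wiring it into the tracker pipeline.

import cv2
import depthai as dai

W, H = 512, 288  # arbitrary test preview size, both divisible by 16

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(W, H)
cam.setInterleaved(False)

# ImageManip that rotates the preview by -90°; note the swapped width/height of the RotatedRect
manip = pipeline.create(dai.node.ImageManip)
rr = dai.RotatedRect()
rr.center.x, rr.center.y = W // 2, H // 2
rr.size.width, rr.size.height = H, W
rr.angle = -90
manip.initialConfig.setCropRotatedRect(rr, False)  # False = coordinates in pixels, not normalized
manip.setMaxOutputFrameSize(W * H * 3)             # rotated BGR frame, 3 bytes per pixel
cam.preview.link(manip.inputImage)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("rotated")
manip.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="rotated", maxSize=4, blocking=False)
    while True:
        cv2.imshow("rotated preview", q.get().getCvFrame())
        if cv2.waitKey(1) == ord('q'):
            break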

I would also be very interested in this rotation feature.
I tried this:

# HandTrackerEdge.py

# Rotate color frames by 90°
rotated_rect = dai.RotatedRect()
rotated_rect.center.x, rotated_rect.center.y = self.img_w // 2, self.img_h // 2
rotated_rect.size.width, rotated_rect.size.height = self.img_h, self.img_w
rotated_rect.angle = 90
manip_rgb = pipeline.create(dai.node.ImageManip)
manip_rgb.initialConfig.setCropRotatedRect(rotated_rect, False)
cam.preview.link(manip_rgb.inputImage)

[...]

# Then replaced the two following `cam.preview.link` calls with the output of the rotation:
manip_rgb.out.link(pre_pd_manip.inputImage)
[...]
manip_rgb.out.link(pre_lm_manip.inputImage)
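
One side effect worth keeping in mind: a ±90° rotation swaps the frame's width and height, so any host-side code that uses self.img_w / self.img_h to map normalized landmark coordinates back to pixels would need the swapped values. Where exactly this belongs in HandTrackerEdge.py is an assumption on my part; the idea is simply:

# The frames the NNs now see are self.img_h wide and self.img_w tall, so swap the two
# before they are used for any later coordinate mapping (attribute names as used above):
self.img_w, self.img_h = self.img_h, self.img_w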

Unfortunately, I got the following error:

[1844301091BC331300] [20.1] [2.124] [ImageManip(1)] [error] Output image is bigger (2239488B)
than maximum frame size specified in properties (1048576B) - skipping frame. Please use the
setMaxOutputFrameSize API to explicitly config the [maximum] output size.
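
The two sizes in the error are telling: 2,239,488 B is 746,496 pixels at 3 bytes per pixel, i.e. a full-resolution BGR frame, while 1,048,576 B is the 1 MiB limit reported for the node (ImageManip's default when setMaxOutputFrameSize is never called, as far as I know). That points at the new manip_rgb node, whose output is the whole rotated frame, rather than at pre_pd_manip / pre_lm_manip, whose outputs are only NN-input sized.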

I tried to comment out the following lines:

pre_pd_manip.setMaxOutputFrameSize(self.pd_input_length*self.pd_input_length*3)
[...]
pre_lm_manip.setMaxOutputFrameSize(self.lm_input_length*self.lm_input_length*3)

Any idea?
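
If that reading is right, commenting out the pre_pd_manip / pre_lm_manip limits won't change anything, since those outputs already fit comfortably; it is most likely the rotation node itself that needs an explicit limit large enough for a full rotated frame. A one-line sketch, assuming self.img_w and self.img_h are the dimensions of the frames linked into manip_rgb:

# Allow manip_rgb to output a full-size rotated BGR frame (3 bytes per pixel).
manip_rgb.setMaxOutputFrameSize(self.img_w * self.img_h * 3)

The existing setMaxOutputFrameSize calls on pre_pd_manip and pre_lm_manip can stay as they are; their outputs are only self.pd_input_length*self.pd_input_length*3 and self.lm_input_length*self.lm_input_length*3 bytes.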