ivy-llc / vision

3D Vision functions with end-to-end support for deep learning developers, written in Ivy.

Home Page: https://unify.ai


# inverse warp rendering

devsugun opened this issue


```
ValueError                                Traceback (most recent call last)
/tmp/ipykernel_33/3014420539.py in <module>
     13
     14 # depth validity
---> 15 depth_validity = ivy.abs(depth1_wrt_f2 - depth2_warp_to_f1) < 0.01
     16
     17 # inverse warp rendering with mask

ValueError: operands could not be broadcast together with shapes (512,512,1) (262144,1)
```

I would like some help resolving the issue with depth_validity. The article I read says you get this error when the operands' sizes or shapes do not match, so you have to reshape the values until they are compatible (sketched below).
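For reference, 262144 = 512 × 512, so it looks like the warped depth comes back flattened to (H·W, 1) while depth1_wrt_f2 keeps the (H, W, 1) image shape. Below is a minimal sketch of one possible fix under that assumption: reshape the flattened depth back to the image dimensions before comparing. The arrays here are placeholders rather than the demo's real outputs, and ivy.set_backend may be ivy.set_framework on older Ivy releases.

```python
import ivy

ivy.set_backend("numpy")  # assumption: recent Ivy; older releases used ivy.set_framework

img_dims = [512, 512]

# placeholders standing in for the real depth maps (shapes taken from the traceback)
depth1_wrt_f2 = ivy.ones(img_dims + [1])      # (512, 512, 1)
depth2_warp_to_f1 = ivy.ones([512 * 512, 1])  # (262144, 1), i.e. flattened

# reshape the flattened warped depth back to the image shape before comparing
depth2_warp_to_f1 = ivy.reshape(depth2_warp_to_f1, img_dims + [1])

# depth validity: both operands now share the (512, 512, 1) shape
depth_validity = ivy.abs(depth1_wrt_f2 - depth2_warp_to_f1) < 0.01
print(depth_validity.shape)  # (512, 512, 1)
```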

Here is where the depth values are loaded:

```python
import cv2
import ivy
import numpy as np

# load the depth PNGs unchanged (flag -1) and reinterpret the raw bytes as float32
depth1 = ivy.array(np.reshape(np.frombuffer(cv2.imread(
    data_dir + '../input/vision/ivy_vision_demos/rt/depth1.png', -1).tobytes(), np.float32), img_dims + [1]))
depth2 = ivy.array(np.reshape(np.frombuffer(cv2.imread(
    data_dir + '../input/vision/ivy_vision_demos/rt/depth2.png', -1).tobytes(), np.float32), img_dims + [1]))
```
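As a quick sanity check (just a suggestion, assuming img_dims is [512, 512] as in the demo), the shapes can be printed right after loading to confirm both depth maps are (512, 512, 1):

```python
# assumes depth1 and depth2 from the snippet above
print(depth1.shape, depth2.shape)  # expected: (512, 512, 1) (512, 512, 1)
```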

I came across an article referencing this issue, where my model does not accept the depth value or cannot process the depth value provided.

The article says:

> In order to perform broadcasting, NumPy internally follows a set of rules to stretch a smaller array to the shape of a larger one. So whenever this error is thrown, check those rules and modify the shape of the array so that broadcasting can succeed.
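To make those rules concrete, here is a small NumPy-only illustration (a sketch using placeholder arrays with the shapes from the traceback, not the demo's actual variables): the trailing dimensions 1 and 1 line up, but 512 versus 262144 do not, and neither of them is 1, so broadcasting fails until the flat array is reshaped.

```python
import numpy as np

a = np.zeros((512, 512, 1))    # image-shaped depth
b = np.zeros((512 * 512, 1))   # flattened depth, shape (262144, 1)

# trailing dims: 1 vs 1 is fine, but 512 vs 262144 mismatch -> broadcast error
try:
    _ = a - b
except ValueError as e:
    print(e)  # operands could not be broadcast together ...

# after reshaping b to match a, the subtraction works
c = a - b.reshape(512, 512, 1)
print(c.shape)  # (512, 512, 1)
```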


Any ideas, please?