rfilkov / AzureKinectUnityFree

Azure Kinect Examples for Unity (free version)

Unproject returns zeros

Zulex opened this issue · comments

commented

Hi!

Thanks a lot for making this code public, really useful in a research project!

I noticed that in 'DepthSensor' all the 'unproject' methods return zero. Is that intended, or is it only the case for the free version?
Also, a side question: I couldn't find a way (in the forums either) to adjust brightness (to calibrate multiple cameras). Any plans on adding that?

Thanks a bunch and keep up the great work!

Hi, and sorry for the delayed response! I'm quite busy lately, especially during the workdays.

I'm not quite sure which unproject methods you mean. If you looked at the methods of DepthSensorBase, then yes - some of them may return zeros, but they're virtual methods overridden by the sensor-specific interfaces (e.g. Kinect4AzureInterface), and the overrides don't return zeros.
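To illustrate the pattern described above - a base class shipping a stub that returns zeros, with each sensor-specific interface overriding it - here is a minimal, hypothetical Python sketch. The class and method names only mirror the ones mentioned in the thread, and the intrinsics are made-up placeholder values, not the asset's actual API:

```python
# Hypothetical sketch: the base class returns zeros, a sensor-specific
# subclass overrides the virtual method with a real implementation.

class DepthSensorBase:
    def unproject_point(self, u, v, depth_mm):
        # Stub: callers get zeros unless the active sensor overrides this.
        return (0.0, 0.0, 0.0)

class Kinect4AzureInterface(DepthSensorBase):
    def unproject_point(self, u, v, depth_mm):
        # Sensor-specific override using made-up pinhole intrinsics.
        fx, fy, cx, cy = 504.0, 504.0, 320.0, 288.0
        return ((u - cx) / fx * depth_mm, (v - cy) / fy * depth_mm, depth_mm)

def make_sensor(connected):
    # Callers always talk to the base type; the override does the work.
    return Kinect4AzureInterface() if connected else DepthSensorBase()

print(make_sensor(True).unproject_point(400, 300, 1000.0))   # non-zero point
print(make_sensor(False).unproject_point(400, 300, 1000.0))  # (0.0, 0.0, 0.0)
```

So seeing zeros in DepthSensorBase itself is expected; only the concrete sensor interface produces real coordinates.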

Currently I don't have any plans to add the camera control features. I only plan to refresh this asset a bit, so it can work with the newer versions of Unity.

commented

No worries about the delayed response, appreciate it a lot!

I found the override/static functions indeed. The purpose of this deep dive was to create the point cloud on a remote client that is not attached to the Kinect. If you have any hints on how I can bypass the search for locally connected Kinects, that would be highly appreciated! (Instantiating the shaders with the sensor data is the main problem now.)

I found that I can just copy the functions in Kinect4Azure, and they only need the intrinsics and extrinsics. Thanks for the hint!

I'm just not sure how the coordmapper.TransformTo3D (or 2D) works exactly. As far as I can tell from the C# scripts, it doesn't really need a camera attached. Does that mean I can create an empty 'calibration' object and just reference that?
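On the point above that such a mapping only needs intrinsics and extrinsics (no live camera attached): a depth-to-color style 3D transform is just a rigid-body transform built from the calibration's rotation and translation. The sketch below is a hypothetical Python illustration with made-up values, not the actual coordmapper API:

```python
# Hedged sketch: transforming a 3D point between two camera coordinate
# systems using only calibration extrinsics (rotation R, translation t).
# No camera needs to be connected; only the stored calibration is used.

def transform_to_3d(point, rotation, translation):
    """Apply p' = R * p + t (row-major 3x3 rotation, translation in mm)."""
    x, y, z = point
    return tuple(
        rotation[i][0] * x + rotation[i][1] * y + rotation[i][2] * z + translation[i]
        for i in range(3)
    )

# Made-up extrinsics: identity rotation, 32 mm baseline along X.
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [32.0, 0.0, 0.0]
print(transform_to_3d((100.0, 50.0, 1000.0), R, t))  # -> (132.0, 50.0, 1000.0)
```

This is why an object holding only the calibration data (intrinsics plus extrinsics) is, in principle, enough to drive such transforms.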

I do understand this is a different question, so feel free to close this issue if you want to.

I'm not quite sure what your use case is. If you need streaming from a remote camera (connected to some other machine), please e-mail me and I can send you the full package to try out this functionality.

You can see how k4a_calibration_2d_to_3d() works in the sources of the Azure Kinect Sensor SDK, which are publicly available. Generally, it's a standard un-projection for a given pixel and distance, plus some additional correction. 3d-to-2d is the opposite: projection of a 3D point onto the image plane.
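The standard un-projection / projection pair mentioned above can be sketched as follows. This is only the idealized pinhole model, without the lens-distortion correction the SDK additionally applies, and the intrinsics here are made-up placeholder values:

```python
# Pinhole camera model: un-projection (pixel + depth -> 3D point) and
# its inverse, projection (3D point -> pixel). Distortion is ignored.

def unproject_2d_to_3d(u, v, depth, fx, fy, cx, cy):
    """Pixel (u, v) at a given depth -> 3D point in camera space."""
    return ((u - cx) / fx * depth, (v - cy) / fy * depth, depth)

def project_3d_to_2d(x, y, z, fx, fy, cx, cy):
    """3D point in camera space -> pixel coordinates (the opposite)."""
    return (fx * x / z + cx, fy * y / z + cy)

# Made-up intrinsics: focal lengths fx, fy and principal point cx, cy.
fx, fy, cx, cy = 504.0, 504.0, 320.0, 288.0
p3d = unproject_2d_to_3d(400.0, 300.0, 1000.0, fx, fy, cx, cy)
print(project_3d_to_2d(*p3d, fx, fy, cx, cy))  # round-trips back to (400, 300)
```

The real SDK functions additionally undistort the input pixel (and distort on the way back), which is the "additional correction" referred to above.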