marek-simonik / record3d

Accompanying library for the Record3D iOS app (https://record3d.app/). Allows you to receive RGBD stream from iOS devices with TrueDepth camera(s).

Home Page: https://record3d.app/


True Depth Camera Artifact at Top of Image and Edge of Object

iowagrade opened this issue · comments

When using Record3D, I am seeing what appears to be an artifact in the depth map at the top of the image and along the edge of the main object when streaming to a webpage using the iPhone 15 TrueDepth camera. I do not see the artifact when I use the LiDAR camera.

I am attaching the depth map portion of an image from the stream. The image includes a face as the main object in the center. The artifact I am referring to is the dark red on the right side of the face and the dark red at the top of the image. I don't have another camera with TrueDepth capability to test with, and I wanted to ask whether this is normal (i.e. others are seeing it too) and whether there is a way to eliminate these artifacts. Thank you.

(Attached image: ivn_record3dEdge_20240215)

Hi,

this is the expected behavior. The dark red areas represent pixels for which the depth camera's firmware was unable to estimate a depth value. Such pixels have their float value set to NaN, which gets converted to the maximum possible depth value when using the color map-based depth encoding. Given that Record3D encodes depth into the Hue component of the HSV model, the maximum possible Hue value (1.0) corresponds to red.
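To illustrate the mapping described above, here is a minimal sketch of a Hue-based depth encoding in which NaN pixels saturate to the maximum Hue (red). The maximum depth range and the linear depth-to-Hue mapping are assumptions for illustration; Record3D's actual encoding parameters may differ.

```python
import numpy as np
import colorsys

# Assumed clipping range for the illustration; the app's real range may differ.
MAX_DEPTH_M = 3.0

def encode_depth_to_hue(depth_m: np.ndarray) -> np.ndarray:
    """Map metric depth linearly to a Hue value in [0, 1]; NaN -> 1.0 (red)."""
    hue = np.clip(depth_m / MAX_DEPTH_M, 0.0, 1.0)
    # Pixels the firmware could not estimate carry NaN; force them to max Hue.
    hue[np.isnan(depth_m)] = 1.0
    return hue

def hue_to_rgb(hue: float) -> tuple:
    """Full-saturation, full-value HSV -> RGB; Hue 1.0 yields pure red."""
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)
```

With this mapping, a NaN depth pixel ends up with Hue 1.0, which converts to RGB (1, 0, 0), matching the dark red regions seen in the attached depth map.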

You see the NaNs (the red pixels) at the edges of the image because of the distortion correction applied between the color camera and the depth camera (the depth image is warped so that RGB and depth pixels correspond to each other).

I think the NaNs you see around objects exist because of parallax occlusions; the IR camera cannot see some of the IR dots projected into the scene as they can be occluded by objects in the scene (the IR dot projector and the IR camera are some distance apart from each other), hence the camera firmware is unable to estimate a sensible depth value near occlusions.

The reason you don't see any of these artifacts with LiDAR is that Apple combines the RGB camera output with the raw LiDAR measurements using AI to produce a hallucinated depth map. Compare what happens when you record a video in a completely dark room versus the same room with the lights on. In other words: LiDAR depth maps are estimated by a different algorithm than the selfie FaceID camera depth maps.

I think this is not a problem specific to Record3D (what you see is the data provided by Apple's APIs), so I think this issue can be closed. However, feel absolutely free to re-open it if you have any questions.

I didn't feel it was specific to Record3D, and I am happy to receive your explanation / thoughts on what is occurring. Thank you for your reply and great work on the topic.