immersive-web / depth-sensing

Specification: https://immersive-web.github.io/depth-sensing/
Explainer: https://github.com/immersive-web/depth-sensing/blob/main/explainer.md


Exposing confidence levels through the API

bialpio opened this issue · comments

The current API shape does not expose confidence levels, even though both ARKit and ARCore provide this information, in two different ways:

  • in ARKit, the confidence map is a separate attribute available on ARDepthData
  • in ARCore, the confidence values are packed into the 3 most significant bits of the depth buffer entries; moreover, those bits are currently documented as always being equal to 000
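To make the ARCore-style packing concrete, here is a minimal sketch of unpacking one such 16-bit entry. The 3-bit/13-bit split and the function name are illustrative assumptions based on the description above, not part of any API:

```javascript
// Sketch: decode an ARCore-style depth entry where the 3 most
// significant bits carry a confidence level and the remaining
// 13 bits carry the depth value. Names are illustrative only.
function unpackDepthEntry(entry) {
  const confidence = (entry >>> 13) & 0x7; // top 3 bits
  const depth = entry & 0x1fff;            // low 13 bits
  return { confidence, depth };
}
```

With the bits currently documented as 000, `confidence` would always come back as 0 for today's ARCore data.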

It seems to me that the cleanest way to expose confidence levels via the depth sensing API is to add a separate attribute to XRDepthInformation (a Uint8Array?). For the ARCore implementation, this would mean always copying the 3 most significant bits of the depth buffer entries into a separate buffer (if we decide to expose this information), and masking them to 0 in the depth buffer in case the underlying ARCore behavior changes. It would also mean surfacing one more opaque WebGLTexture so the data is available for GPU consumption (assuming we want to optimize for GPU access, see my comment). Additionally, for devices that support depth but do not offer confidence values, we could simply return null to communicate this.
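The copy-and-mask step described above could be sketched as follows. This is a hypothetical illustration of what a browser implementation might do internally, assuming the 3-bit confidence / 13-bit depth split; none of these names are spec'd:

```javascript
// Sketch: split a packed ARCore-style depth buffer into a masked
// depth buffer (confidence bits forced to 0) and a separate
// Uint8Array of confidence values. Illustrative only.
function splitDepthAndConfidence(packed /* Uint16Array */) {
  const confidence = new Uint8Array(packed.length);
  const depth = new Uint16Array(packed.length);
  for (let i = 0; i < packed.length; i++) {
    confidence[i] = (packed[i] >>> 13) & 0x7; // extract top 3 bits
    depth[i] = packed[i] & 0x1fff;            // zero them in the depth copy
  }
  return { depth, confidence };
}
```

The key property is that script-visible depth entries always have zeroed high bits, regardless of what the underlying SDK starts putting there.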

If the above sounds acceptable, the current API shape lets us add the separate attribute later, once it is actually needed, so I'm not worried. Alternatively, if we were to pack confidence levels into the depth buffer entries (ARCore-style), that is something we would need to decide on now to avoid making breaking changes later.
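In WebIDL terms, the additive change discussed here might look something like the fragment below. This is a sketch, not adopted spec text, and the attribute name is a placeholder:

```
// Illustrative only — not part of the depth-sensing spec.
partial interface XRDepthInformation {
  // null on devices that report depth without confidence values.
  readonly attribute Uint8Array? confidenceMap;
};
```

Because this is purely additive, shipping the API without it now would not be a breaking change, which is the point made above.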

Closing the issue: it makes no sense to follow the underlying SDK's API and try to pack the data, since that results in a less clean API shape.