immersive-web / depth-sensing

Specification: https://immersive-web.github.io/depth-sensing/
Explainer: https://github.com/immersive-web/depth-sensing/blob/main/explainer.md


Revisit a way to surface depth API to the app?

bialpio opened this issue

Quote from twitter's @mrmaxm:
https://twitter.com/mrmaxm/status/1333516895975305218

"Or look into similar approach of Hand Tracking API with providing allocated array into function, which will fill it with data.
Allocations - is very important issue with realtime apps."

We have a few options here:

  1. Keep the API the way it is now & specify that XRDepthInformation is only usable when the frame it came from is active.
  2. Expose a method that will populate app-provided Uint8Array with the current depth data.
  3. (related to #4) Expose depth data via a WebGLTexture. Its lifetime will likely also need to be limited as in pt.1 above.
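To make option 2 concrete, here is a minimal sketch of the fill-an-app-provided-array pattern. The method name `fillDepthData` and its return value are purely illustrative (nothing like it exists in the current spec), and a plain function stands in for the UA so the idea is self-contained:

```javascript
// Sketch of option 2: the app preallocates a buffer once, and a hypothetical
// API method copies the current frame's depth values into it, avoiding a
// fresh allocation per frame. All names here are illustrative.

// Mock stand-in for the UA-side depth buffer (normally filled from ARCore).
const uaDepthBuffer = new Uint8Array([10, 20, 30, 40]);

// Hypothetical API surface: fills the app-provided array and returns the
// number of bytes written, so the app can detect size mismatches.
function fillDepthData(outBuffer) {
  if (outBuffer.length < uaDepthBuffer.length) {
    throw new RangeError('app-provided depth buffer is too small');
  }
  outBuffer.set(uaDepthBuffer);
  return uaDepthBuffer.length;
}

// App side: allocate once, reuse on every frame.
const appBuffer = new Uint8Array(4);
const written = fillDepthData(appBuffer); // no per-frame allocation
```

Note that the copy into `appBuffer` still happens on every call; only the `Uint8Array` allocation is saved, which is the point made in the discussion of option 2 below.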

A bit of background on Chrome's current implementation: the renderer receives new depth data on every frame (via a shared-memory buffer coming from the device process). The allocation + copy on the device side is unavoidable, since we need some way of getting the data out of ARCore. The depth buffer is then passed on and stored on the renderer side, and is copied every time the app requests an instance of XRDepthInformation.
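The per-request copy described above can be illustrated with a simplified mock (these names are stand-ins, not the real Chrome internals): each request hands the app an independent copy of the renderer's stored buffer, so instances cannot clobber one another, at the cost of one copy per request.

```javascript
// The renderer keeps one depth buffer per frame...
const rendererDepthBuffer = new Uint8Array([1, 2, 3, 4]);

// ...and each request for depth information copies it, so every
// XRDepthInformation-like instance owns independent data.
function getDepthInformation() {
  return { data: rendererDepthBuffer.slice() }; // copy on every request
}

const a = getDepthInformation();
const b = getDepthInformation();
a.data[0] = 99; // mutating one instance's copy...
// ...leaves the other instance and the shared buffer untouched.
```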

More thoughts on the above options, and how each could change Chrome's implementation:

  1. Ensuring that the data is only valid during an active XRFrame allows us to skip a copy of the depth buffer when the application requests depth information: since instances of XRDepthInformation are usable only while a frame is active, they can share the underlying depth buffer among themselves (and once the frame becomes inactive, we can reclaim the buffer). The drawback is that the app could accidentally overwrite entries in this buffer, and those overwritten entries would then be visible via the other XRDepthInformation instances.
  2. I believe this will not actually help in our case. Filling out an app-provided array would still incur a copy of a potentially non-trivial amount of data, and would only save the allocation of a Uint8Array object (which, AFAICT, is not that expensive if the array is just a view into another buffer). What this approach would help with, though, is that if the app writes to its own array, it only overwrites its own copy.
  3. Similar to pt. 1 above, with the same drawback: the app can upload new data to the texture (though this is harder to do accidentally, which may make the API change worth it for that benefit alone).
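The aliasing drawback noted for option 1 can be sketched the same way (again with mock names, not the real API): if instances share the underlying buffer while the frame is active, a write through one instance is visible through all the others.

```javascript
// Under option 1, one buffer is shared for the lifetime of the frame...
const sharedFrameBuffer = new Uint8Array([5, 6, 7, 8]);

// ...and each XRDepthInformation-like instance is just a view over it,
// so handing one out costs no copy at all.
function getDepthInformationShared() {
  return { data: new Uint8Array(sharedFrameBuffer.buffer) }; // no copy
}

const first = getDepthInformationShared();
const second = getDepthInformationShared();
first.data[0] = 42; // accidental overwrite through one instance...
// ...is now visible via every other instance and the shared buffer.
```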