immersive-web / occlusion

Below-the-line approach to secure occlusion rendering

rgerd opened this issue · comments

rgerd commented:

Hi all,

I work in Microsoft's Mixed Reality organization, looking at how to support occlusion for the web on devices like HoloLens using web frameworks like BabylonJS.

I've recently been looking into the diverse set of surface reconstruction methods, each of which mentions its applicability to occlusion.

Overall, there seem to be two main themes: occlusion by geometry (a reconstructed mesh) or by depth map (either from a SLAM-like algorithm or from LiDAR). Technically, no matter which we use, we're just writing values into the depth buffer and z-testing against them.
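
To make that concrete: whether the source is a reconstructed mesh or a sensor depth map, the render loop reduces to something like the sketch below (TypeScript/WebGL2; `realWorldDepthTexture` and the two draw helpers are hypothetical stand-ins for whatever the platform provides):

```ts
declare const gl: WebGL2RenderingContext;
declare const realWorldDepthTexture: WebGLTexture; // mesh render or sensor depth map
declare function drawFullScreenQuad(depthTex: WebGLTexture): void; // writes gl_FragDepth
declare function drawVirtualScene(): void;

function renderFrameWithOcclusion(): void {
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

  // Pass 1: write real-world depth into the depth buffer.
  // Color writes are off; only gl_FragDepth lands in the buffer.
  gl.colorMask(false, false, false, false);
  gl.enable(gl.DEPTH_TEST);
  gl.depthFunc(gl.ALWAYS);
  drawFullScreenQuad(realWorldDepthTexture);

  // Pass 2: render virtual content with ordinary z-testing.
  // Fragments behind real-world surfaces fail the test and disappear.
  gl.colorMask(true, true, true, true);
  gl.depthFunc(gl.LESS);
  drawVirtualScene();
}
```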

Following the question posed in this thread, and given that surface reconstruction has improved significantly over the last year, I'm wondering if it makes sense to target a very simple API for occlusion: one that gives the significant added value of passively occluding virtual content without the app worrying about implementation details. Basically, let the browser decide how it wants to handle assigning alpha in the color buffer or pre-filling the depth buffer, and don't expose any meshes or depth maps solely for the purpose of occlusion.

The idea behind hiding this information is to avoid many of the privacy issues involved in exposing real-world data, as well as to allow different systems to implement occlusion as best suits their hardware.

However, to get both security and performance, we would likely need to make the depth buffer write-only. My impression is that we would want to render the relevant depth values for occlusion before handing the framebuffer to the app rendering the scene, so that we could get perf wins by discarding occluded fragments in the rasterization step. But that would require making sure the pre-filled depth buffer doesn't become a threat vector, since someone could read the depth values back and map out the space in which the content was being rendered.
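
For illustration, here is what the app-side contract might look like if the UA pre-filled the framebuffer's depth attachment before each frame. This is purely a sketch of the proposal, not existing WebXR behavior:

```ts
declare const gl: WebGL2RenderingContext;
declare function drawVirtualScene(): void;

function onXRFrame(time: DOMHighResTimeStamp, frame: XRFrame): void {
  const session = frame.session;
  const glLayer = session.renderState.baseLayer!;
  gl.bindFramebuffer(gl.FRAMEBUFFER, glLayer.framebuffer);

  // Clear color only: the UA has (hypothetically) already pre-filled
  // real-world depth, and clearing DEPTH_BUFFER_BIT would destroy it.
  gl.clear(gl.COLOR_BUFFER_BIT);

  gl.enable(gl.DEPTH_TEST);
  gl.depthFunc(gl.LESS);
  drawVirtualScene(); // fragments behind real surfaces fail early-z

  session.requestAnimationFrame(onXRFrame);
}
```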

I'd just like to get the ball rolling and see what thoughts people have around these issues.

Adding some people I've seen in related conversations:
@bialpio @thetuvix @cabanier @toji @blairmacintyre @bricetebbs

Hi
There is a feature that I think most people have to implement when working with augmented reality, and that's depth testing.
IMHO it would be good if WebXR had a "depth testing" feature that we could request at session initialization, so that real-world objects occlude the augmented 3D models.
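
For example, something along these lines, where `'occlusion'` is a hypothetical feature string used purely for illustration (WebXR defines no such feature name today):

```ts
async function startSessionWithOcclusion(): Promise<XRSession> {
  // 'occlusion' is a hypothetical feature string, not part of any current spec.
  const session = await navigator.xr!.requestSession('immersive-ar', {
    optionalFeatures: ['occlusion'],
  });
  // If granted, the UA handles occlusion itself (depth pre-fill or
  // compositor-side rejection); the page never sees meshes or depth maps.
  return session;
}
```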

I think a pre-filled depth buffer would always be readable, even if not directly: content could render a series of full-screen quads of different colors, from far to near depth levels, with the depth test inverted from the usual (set to pass if the fragment is behind the stored depth value), and then call readPixels on the color data.
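
For concreteness, a sketch of that attack in TypeScript/WebGL2, where `drawFullScreenQuadAtDepth` is a hypothetical helper that draws a solid-color quad at a fixed depth:

```ts
declare const gl: WebGL2RenderingContext;
// Hypothetical helper: draws a solid-color quad at a fixed depth z in [0, 1].
declare function drawFullScreenQuadAtDepth(z: number, color: [number, number, number]): void;

function probeDepthBuffer(width: number, height: number): Uint8Array {
  gl.enable(gl.DEPTH_TEST);
  gl.depthMask(false);      // depth stays untouched; the leak is via color
  gl.depthFunc(gl.GREATER); // inverted test: pass where the quad is BEHIND stored depth

  const steps = 256;
  for (let i = 0; i < steps; i++) {
    const z = 1.0 - i / (steps - 1); // sweep from far (1.0) to near (0.0)
    drawFullScreenQuadAtDepth(z, [i / (steps - 1), 0, 0]); // encode the slice index in red
  }

  // The nearest quad that still passed at each pixel wrote last, so the
  // red channel now encodes (1 - storedDepth), quantized to 8 bits.
  const pixels = new Uint8Array(width * height * 4);
  gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  return pixels;
}
```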

It seems to me the privacy-preserving approach would be to do occlusion in the XRCompositor, which would discard pixels from the content framebuffer wherever the content depth value is behind the real-world depth value. The downside is that content rendered with depth testing but not depth writing wouldn't benefit from real-world occlusion.
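
Conceptually, the compositor's rejection pass might look like the fragment shader below (GLSL carried in a TypeScript string purely for illustration; the real XRCompositor is UA-internal and not scriptable from the page):

```ts
// Illustrative only: this shows the per-pixel rejection the compositor
// could perform, not an API available to content.
const compositorOcclusionFS = `#version 300 es
precision highp float;
uniform sampler2D contentColor;   // the app's rendered color buffer
uniform sampler2D contentDepth;   // the app's depth buffer (UA-side only)
uniform sampler2D realWorldDepth; // depth from reconstruction or LiDAR
in vec2 uv;
out vec4 fragColor;

void main() {
  float appDepth   = texture(contentDepth, uv).r;
  float worldDepth = texture(realWorldDepth, uv).r;
  // Drop content pixels that lie behind real-world geometry.
  if (appDepth > worldDepth) discard;
  fragColor = texture(contentColor, uv);
}`;
```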

You wouldn't get the performance benefit of early-z culling, but that is likely the price to pay for the enhanced privacy protection. As a (non-evil) content developer I'd still like a mode with full access to that data too, with appropriate permission prompts.