immersive-web / hit-test

Home Page: https://immersive-web.github.io/hit-test/


Using hit test API for floor/bounds in VR mode

klausw opened this issue · comments

PR immersive-web/webxr#493 discusses "Combining virtual and real-world hit testing"; the intent appears to be that the WebXR API takes care of real-world understanding, while the application is responsible for virtual objects. For consistency, should real-world information provided by VR headsets also be handled by the hit test API?

Most of the reference spaces have a floor plane, and the bounded reference space adds xrReferenceSpace.boundsGeometry, which is documented as a recommended area in which the application should place content to ensure it is reachable. An application could use this information to do its own hit tests by treating the floor and bounds as virtual objects, but I think this approach is not very robust, and it may lead applications to bake in assumptions that wouldn't be a good fit for advanced systems with better world understanding.
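The do-it-yourself approach described above amounts to simple geometry. As an illustration (not part of any proposed API), here is a ray/floor-plane intersection against the y = 0 plane that floor-level reference spaces provide; the function name and plain {x, y, z} vectors are my own:

```javascript
// Sketch of an app-side "virtual" floor test: intersect a controller ray
// with the y = 0 plane (floor-level reference spaces place the floor there).
// Returns the intersection point, or null if the ray never reaches the floor.
function intersectRayWithFloor(origin, direction) {
  // A ray parallel to the floor, or pointing away from it, never hits.
  if (direction.y >= 0) return null;
  const t = -origin.y / direction.y;
  return {
    x: origin.x + t * direction.x,
    y: 0,
    z: origin.z + t * direction.z,
  };
}
```

This is exactly the kind of hard-coded assumption (a single infinite flat floor) that a system with real world understanding could do better than.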

If we were to encourage VR applications to also use the hit test API to compute intersections with the floor or with the bounds geometry, those applications could potentially work unchanged on more advanced systems, such as untethered VR headsets that operate in an unbounded space.
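For reference, this is roughly what such a VR application could look like using the existing WebXR Hit Test Module shapes (requestHitTestSource, getHitTestResults, getPose). Whether these calls would behave this way in a pure VR session is exactly what this issue is asking, so treat this as a hedged sketch; the helper name and structure are mine:

```javascript
// Hypothetical sketch: a VR app asks for hit-test results along the
// viewer's forward ray and reads back the nearest real-world intersection
// (floor or bounds). Assumes the session was requested with the
// 'hit-test' feature. Returns a per-frame polling function.
async function createFloorHitTester(session, refSpace) {
  // Hit-test source anchored to the viewer's pose (forward ray).
  const viewerSpace = await session.requestReferenceSpace('viewer');
  const source = await session.requestHitTestSource({ space: viewerSpace });
  return function onFrame(frame) {
    const results = frame.getHitTestResults(source);
    if (results.length === 0) return null;
    // Results are ordered by distance; the first is the nearest hit.
    return results[0].getPose(refSpace);
  };
}
```

The point of the sketch is that the same code would work whether the implementation answers from a simple floor plane, from boundsGeometry, or from a full world mesh.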

It would still be up to the implementation to provide a safety chaperone or similar system to prevent people from running into things, but that tends to be immersion-breaking. I think encouraging use of the hit test API would be a useful way to surface real-world geometry inside VR applications, for example to render virtual objects as barriers that discourage people from getting too close to real limits.

xrReferenceSpace.boundsGeometry would still be useful as a static mechanism for providing overall bounds to the application, but it doesn't seem suitable as a dynamic mechanism, and "here's a good area to place reachable content" also seems to be a distinct use case from probing real-world boundaries. Typically I'd expect boundsGeometry to be a simple rectangle or circle; using it to provide detailed geometry for multiple rooms would be undesirable for privacy. It's also not available in unbounded mode, and an untethered headset with inside-out mapping may not have detailed bounds information available at application startup. The hit test API seems like a better mechanism for handling dynamic world understanding.
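As background for the static use case: boundsGeometry is an array of DOMPointReadOnly vertices lying in the x-z plane of the bounded-floor space, so an application can already run a standard even-odd point-in-polygon test against it. The function name below is illustrative, not part of the API:

```javascript
// Even-odd (crossing-number) test: is a point inside the polygon given by
// xrReferenceSpace.boundsGeometry? Points are compared on x and z only,
// since bounds vertices lie on the floor plane.
function isInsideBounds(point, boundsGeometry) {
  let inside = false;
  const n = boundsGeometry.length;
  for (let i = 0, j = n - 1; i < n; j = i++) {
    const a = boundsGeometry[i];
    const b = boundsGeometry[j];
    // Toggle on each polygon edge crossed by a ray cast along +x.
    if ((a.z > point.z) !== (b.z > point.z) &&
        point.x < ((b.x - a.x) * (point.z - a.z)) / (b.z - a.z) + a.x) {
      inside = !inside;
    }
  }
  return inside;
}
```

This works fine for a one-time "am I inside the play area" check, but it only ever reflects the single static polygon handed over at startup, which is the limitation the paragraph above describes.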

A user agent would be free to do hit testing based only on the floor plane and boundsGeometry (if available), including in cases where the user doesn't consent to exposing more detailed world geometry to the application. So I think this would not be a large burden for implementations, while leaving flexibility to use better data when available.

Moving to AR module while we figure out the correct final location for this issue