immersive-web / webxr-ar-module

Repository for the WebXR Augmented Reality Module

Home Page: https://immersive-web.github.io/webxr-ar-module

Decide on the progressive enhancement strategy for WebXR

ddorwin opened this issue

One of the takeaways from immersive-web/webxr#786 and related discussions is that there is no clear, agreed-upon solution for progressive enhancement / fallback - or even agreement on how important it is.

As @cwilso summarized in that issue:

In short - the key questions that have arisen appear to me to be ones of the tradeoffs between progressive capability across devices (a core tenet of the Web, I'd point out) vs. ensuring developer control and best presentation in given environments (clearly desirable). I've heard the concerns about ensuring that we don't provide two ways to create AR content in the short term, and agree this is a concern. I think we can avoid this. I also hear the question about whether VR content should be automatically supported in a see-through AR device, and agree this needs to be a choice left up to the UA. (I'd be personally disappointed if it weren't supported by my device, but concede this is a personal choice.)

While the modes and VR headsets supported by the core spec are fairly straightforward, the introduction of see-through headsets and a new AR mode is an appropriate time to establish a strategy and guidelines for how applications - and user agents - should be implemented so that XR content has the best chance of being accessible to as many users on as many clients as possible, regardless of whether the content was designed with a specific form factor in mind.
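
For instance, an application aiming for this kind of reach can probe the modes it can handle in preference order before requesting a session. This is only a minimal sketch in TypeScript, assuming WebXR type definitions (e.g., @types/webxr) are available; the preference order is illustrative, not a spec recommendation:

```ts
// Probe session modes in preference order; isSessionSupported() is
// part of the core WebXR Device API.
async function selectSessionMode(): Promise<XRSessionMode | null> {
  const preferenceOrder: XRSessionMode[] = ['immersive-ar', 'immersive-vr', 'inline'];
  if (!navigator.xr) return null; // no WebXR at all: fall back to a 2D page
  for (const mode of preferenceOrder) {
    if (await navigator.xr.isSessionSupported(mode)) return mode;
  }
  return null;
}
```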


Related questions include:

  • How much priority do WebXR and related specs place on progressive capability across devices?
    • How should they weigh a potentially poor user experience against blocking access to content?
  • Should there be a distinction between immersive VR and AR sessions or does that create more problems? (#28)
  • Is it reasonable to create AR content - and use the same XRSessionMode - for both handheld and headset form factors? (#29)
  • What can the API do to address these issues or alleviate concerns?

It would be good to get the TAG's input on this, especially the first question. /cc @alice

My personal view has been (and remains) something like this:

  • we want to make it possible for all content to be accessible on all devices. For access, especially when considering disabilities and economic inequity, this feels non-negotiable.
  • we want developers to be able to request their preferences, but make it possible for users to do SOMETHING on all devices.
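
The required/optional feature split on requestSession() is one concrete way the API already expresses this: developers state their preferences, but only hard requirements can block the session. A hedged sketch - the feature descriptors are real ('hit-test' and 'dom-overlay' come from separate modules), but which ones an app lists is the app's choice:

```ts
// Only hard requirements go in requiredFeatures; everything the app can
// live without goes in optionalFeatures, so the session can still be
// granted on devices that lack those capabilities.
async function startSession(): Promise<XRSession> {
  return navigator.xr!.requestSession('immersive-ar', {
    requiredFeatures: ['local'],                   // cannot run without a local space
    optionalFeatures: ['hit-test', 'dom-overlay'], // nice to have; degrade gracefully
  });
}
```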

Patterns that we can build examples of to guide developers to achieve this:

  • a VR world is divided between "environment" and "content". If drawn on an AR device, hide the environment. "Teleporting" and "flying" have the visual effect of "moving the virtual stuff relative to me" instead of "moving me relative to the world." Think of the visual model of the Construct in the Matrix, where content is pulled to Neo and Morpheus, rather than them moving through the content. (I fully understand that some experiences will still want/need to render full screen, such as 360 video, or a building being walked through in an architectural scenario ... fine, that stuff is "content" and it will render full screen.) See the environment/content sketch after this list.
  • an AR world is based on the real "world". If drawn in VR, provide a dummy world - something like Steam Home or the Cliff House - then put the content relative to that. (Eventually, a UA might just decide to offer users the ability to render AR apps in the "VR Home", but that would actually just appear as "AR" to the app, so it's a different case.)
  • provide a set of simple GUI widgets (e.g., dat.gui-xr) that can be used to create simple 2D UIs for apps. A handheld AR app might use a DOM overlay for its 2D UI; guide developers to build the same simple 2D UI with alternative, well-defined widgets where an overlay isn't available (see the overlay sketch after this list).
  • provide examples of mapping between tap and ray inputs in AR (handheld and head-mounted) so devs see how to manage the different cases (sketched after this list).
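
To make the first two bullets concrete: an app can gate its virtual backdrop on XRSession.environmentBlendMode, which the spec exposes for exactly this distinction. A sketch, where SceneGroup stands in for whatever node/group type the app's renderer uses:

```ts
interface SceneGroup { visible: boolean } // stand-in for a renderer's group type

// 'opaque' means the device shows only synthetic content (VR);
// 'additive' and 'alpha-blend' mean the real world shows through (AR).
function applyEnvironmentContentSplit(
  session: XRSession,
  environment: SceneGroup, // the virtual world backdrop
  content: SceneGroup,     // the objects the user actually cares about
): void {
  const seesRealWorld = session.environmentBlendMode !== 'opaque';
  environment.visible = !seesRealWorld; // hide the backdrop on see-through devices
  content.visible = true;               // content renders everywhere
}
```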
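
For the widget bullet, a handheld AR app might request 'dom-overlay' (from the WebXR DOM Overlays module) as an optional feature and fall back to in-scene widgets when it isn't granted. The element id and fallback function here are hypothetical:

```ts
declare function buildInSceneWidgets(): void; // hypothetical dat.gui-xr-style fallback

async function startArWithUi(): Promise<XRSession> {
  const uiRoot = document.getElementById('xr-ui')!; // hypothetical overlay container
  const session = await navigator.xr!.requestSession('immersive-ar', {
    optionalFeatures: ['dom-overlay'],
    domOverlay: { root: uiRoot },
  });
  // domOverlayState is only set when the overlay was actually granted
  // (e.g., handheld AR); on a headset, build the same UI in-scene instead.
  if (!session.domOverlayState) buildInSceneWidgets();
  return session;
}
```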
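
And for the input bullet, the core spec already normalizes tap and ray input: every select event carries an input source whose targetRaySpace resolves the same way regardless of targetRayMode. A sketch, with placeObjectAt() as a hypothetical app function:

```ts
declare function placeObjectAt(transform: XRRigidTransform): void; // hypothetical app logic

function wireUpSelect(session: XRSession, referenceSpace: XRReferenceSpace): void {
  session.addEventListener('select', (event: XRInputSourceEvent) => {
    // targetRayMode distinguishes the form factors:
    //   'screen'          - handheld AR; the ray originates from a screen tap
    //   'tracked-pointer' - a headset controller's pointing ray
    //   'gaze'            - head gaze on controller-less headsets
    // All three expose targetRaySpace, so one code path handles them all.
    const pose = event.frame.getPose(event.inputSource.targetRaySpace, referenceSpace);
    if (pose) placeObjectAt(pose.transform);
  });
}
```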

We do not need to create one-size-fits-all-and-solves-all toolkits; we need to provide samples that do the right thing across the scenarios, and then work with framework and tool devs. The most important thing is to have Sumerian, three.js, Babylon, etc., provide solutions, as that is what most devs will use.

In the spirit of Blair's comment, we would experiment with AR-in-VR features for FxR on relatively cheap (e.g., Oculus Go / Quest) hardware, but we don't see it as a requirement for all UAs.

I think the user would need to turn on such a mode explicitly if that's the only device they own, but the user may also decide to block content that is not optimal for their device. So even if a browser supports running AR or VR content on less optimal devices, the user may still choose not to allow it. The content therefore still needs to expect that it may be blocked.

We could test our assumptions here by exploring how someone would use the Microsoft Accessible Controller with their SteamVR headset... Should users be able to change their controller pose by means other than physically moving it? Such controllers are connected by cables and matched to devices bespoke to each individual user.

This was discussed a bunch at TPAC and later amongst the editors. It's hard for us to see a concrete outcome of this issue, as opposed to the individual issues about specific cases of progressive enhancement. For now, we should be dealing with those on a case-by-case basis.

@blairmacintyre's comment here is a useful thing to refer back to when such discussions happen again, though.