MicrosoftEdge / WebView2Feedback

Feedback and discussions about Microsoft Edge WebView2

Home Page: https://aka.ms/webview2


Combining WebView2 with native directx rendering

stefan1000 opened this issue · comments

I'm looking into possible frameworks for renewing a Win32 application consisting of a rather simple UI plus DirectX 9/11 rendering, as sketched here. It is sufficient to just have the rendering displayed; input events can be dispatched from the HTML5 app.

While prototyping this with WebView2 I realized that it is not possible to have a transparent WebView2, so the initial idea of a layered approach doesn't seem to work. Looking at microsoft/microsoft-ui-xaml#2992, it seems that this is not coming to WebView2 in the near future.

An alternative would be to do the compositing using WebGL and a shared D3D11 texture (updated from the main process and then blitted by WebView2's WebGL)...but for this too there seems to be no API or extension point that would allow it.

Are there alternatives that I'm not aware of?

Thx
[image: WebViewLayout]

Can you add some details on your scenario:

  1. Are all of the brown boxes HTML? Or just the header part?
  2. Is that header part layered on top of the native stuff?
  3. What are your target platforms? I know we have a team prototyping in Unity using some capture APIs that are only available on more recent Windows versions - I can try and find out more if that's a possibility for you.

We have items on our backlog for transparency, but as noted it's probably not in the near future (#205). We are also looking into offscreen rendering as a way to more easily support DirectX hosting (#20).

  1. Yes, the brown part is one Angular app (header and footer, but also HTML dialogs, select boxes, and tooltips that can overlap the DX area)
  2. No, we have a strict layering: top - HTML, bottom - DirectX, so if WebView2 preserved transparency it would be normal alpha compositing. Mouse events are not an issue; we can easily capture those on the HTML area and forward them to the native part.
  3. x64 Windows 10, ideally also supporting Win7 and 2012 R2, and working via RDP or Citrix.

DirectX hosting would definitely help, assuming that the alpha channel of the texture is still intact.

The alternative option I was thinking about is to use a WebGL canvas element, render the DirectX part offscreen, and blit it using ANGLE, but for this we would need something like the following (plus a native plugin, which is also not available in WebView):
https://github.com/Microsoft/angle/wiki/Interop-with-other-DirectX-code

Oh gotcha, so in your image above the area between the header and footer is still HTML, but transparent to show the native visuals below.

Are you able to split the HTML and use multiple WebViews composed above/with the DirectX? For example, header would be a WebView, and footer would be a separate WebView - there would be no transparent region above the DirectX.

No, I'm not able to split this: it is one single-page Angular application. For example, the header bar contains controls such as a select box that, when clicked, displays items overlapping the DX content. The same goes for message boxes (HTML), tooltips, and other elements.

I am not extremely familiar with DirectX programming, but have you considered, as a workaround, rendering to a byte array, pulling (or pushing) that data into the WebView context, and rendering it to a canvas element?

Native side (C# in this case)

    // Exposed to script via CoreWebView2.AddHostObjectToScript("bridge", new Bridge()),
    // so the page can reach it as chrome.webview.hostObjects.sync.bridge.
    [ClassInterface(ClassInterfaceType.AutoDual)]
    [ComVisible(true)]
    public class Bridge
    {
        byte[] buffer;
        int height, width;

        public void Initialize(int height, int width)
        {
            this.height = height;
            this.width = width;
            buffer = new byte[height * width * 4]; // RGBA, 4 bytes per pixel
        }

        // Fills the buffer with a time-varying test pattern; a real
        // implementation would copy the native render output here instead.
        public byte[] GetImage()
        {
            var b = (byte)(DateTime.Now.Millisecond * byte.MaxValue / 1000);

            for (var y = 0; y < height; y++)
            {
                for (var x = 0; x < width; x++)
                {
                    var i = (y * width + x) * 4;

                    buffer[i] = (byte)x;     // R
                    buffer[i + 1] = (byte)y; // G
                    buffer[i + 2] = b;       // B
                    buffer[i + 3] = 255;     // A (opaque)
                }
            }

            return buffer;
        }
    }

and WebView2 side:

<body>
    <canvas height="256" width="256"></canvas>
    <script>
        const canvas = document.querySelector('canvas');
        const ctx = canvas.getContext('2d');

        const height = 256;
        const width = 256;

        chrome.webview.hostObjects.sync.bridge.Initialize(height, width);

        requestAnimationFrame(animate);

        function animate() {
            const data = chrome.webview.hostObjects.sync.bridge.GetImage();
            const pixels = new Uint8ClampedArray(data);

            const imageData = new ImageData(pixels, width, height);
            ctx.putImageData(imageData, 0, 0);

            requestAnimationFrame(animate);
        }
    </script>
</body>

This creates a means of "rendering from native into the DOM" with support for composed HTML overlays. Obviously, with a fancier system you could be more clever: instead of pulling and redrawing the entire canvas on every frame, you could push repaint rectangles down as well, or dispatch an event via PostWebMessageAsJson to tell the JS side when to repaint.
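That push approach could be sketched as follows. This is hypothetical: it assumes the native side calls `CoreWebView2.PostWebMessageAsJson` with a dirty rectangle `{x, y, w, h}`, and that `GetImageRegion` is a host-object method you would add alongside `GetImage` (it is not part of the sample above). The handler is factored out so only the commented wiring touches browser globals:

```javascript
// Hypothetical: build a handler that repaints only the dirty rectangle
// announced by the host. `ctx` is a 2D canvas context and `bridge` is the
// host-object proxy; `GetImageRegion` is an assumed addition to the Bridge
// class above, returning RGBA bytes for just that region.
function makeRepaintHandler(ctx, bridge) {
    return (event) => {
        const { x, y, w, h } = event.data;
        const pixels = new Uint8ClampedArray(bridge.GetImageRegion(x, y, w, h));
        ctx.putImageData(new ImageData(pixels, w, h), x, y);
    };
}

// Wiring inside the WebView2 page:
//   const ctx = document.querySelector('canvas').getContext('2d');
//   const bridge = chrome.webview.hostObjects.sync.bridge;
//   window.chrome.webview.addEventListener('message',
//       makeRepaintHandler(ctx, bridge));
```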

[image]

Hi Nicholas,

thanks for the code. Streaming the whole image as RGBA is probably not a solution for our setup (e.g. 4K resolution at >100 fps), and synchronizing the two render engines is also non-trivial. It may be an option to just stream the texture data to the browser and use WebGL on the browser side for the rest...I'll have to look into this.
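That WebGL variant might look roughly like this. It's a sketch under the assumption that the bytes arrive as tightly packed RGBA (e.g. from the host-object bridge above); `createUploader` is a hypothetical helper that only needs a WebGL context:

```javascript
// Hypothetical sketch: create a texture once, then re-upload raw RGBA bytes
// into it each frame via texImage2D. `gl` would come from
// canvas.getContext('webgl') inside the WebView2 page.
function createUploader(gl) {
    const tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    // No mipmaps; clamp so non-power-of-two sizes work in WebGL 1.
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

    return function upload(pixels, width, height) {
        gl.bindTexture(gl.TEXTURE_2D, tex);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                      gl.RGBA, gl.UNSIGNED_BYTE, new Uint8Array(pixels));
        // ...then draw a full-screen quad sampling this texture.
    };
}
```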

But looking at the other related issues and comments I still think that WebView2 GPU-compositing with alpha channel support would allow many applications to just use their native backend and combine it with an HTML-UI framework.

Oh yikes, 4K! Yeah, just kidding, don't do anything I suggested above!

Pixel copying/drawing certainly has performance implications, and absent another approach the best you could do would be frame down-sampling or advanced logic to efficiently identify repaint regions; both would impact the experience or add complexity.

For sure, proper compositing is needed long-term, though I'm not super optimistic given the long-running history of similar issues that were never addressed (e.g. WPF and hosted-content airspace), but who knows, maybe "this time it's different"! 🤞
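A minimal (hypothetical) version of that "identify repaint regions" idea is to diff consecutive RGBA frames and compute a bounding dirty rectangle, so only that region needs to be re-sent and redrawn:

```javascript
// Hypothetical sketch: compare two tightly packed RGBA frames of the same
// size and return the bounding rectangle of all changed pixels, or null if
// the frames are identical.
function dirtyRect(prev, next, width, height) {
    let minX = width, minY = height, maxX = -1, maxY = -1;
    for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
            const i = (y * width + x) * 4;
            if (prev[i] !== next[i] || prev[i + 1] !== next[i + 1] ||
                prev[i + 2] !== next[i + 2] || prev[i + 3] !== next[i + 3]) {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }
    if (maxX < 0) return null; // nothing changed
    return { x: minX, y: minY, w: maxX - minX + 1, h: maxY - minY + 1 };
}
```

Scanning every pixel per frame is itself O(width × height), so in practice you'd diff coarser tiles, but the idea is the same.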

@stefan1000 did you find a way to do this?

@Ivshti Yes, with a recent version of WebView2 it is possible to make the background transparent. We can run DX9 in the back and the HTML UI in the front, just as sketched in the first post, using Win32 (ICoreWebView2Environment).

    // Requires ICoreWebView2Controller2 from a recent WebView2 SDK;
    // an alpha of 0 makes the WebView background fully transparent.
    COREWEBVIEW2_COLOR wvColor;
    wvColor.R = 255;
    wvColor.G = 255;
    wvColor.B = 255;
    wvColor.A = 0;

    wil::com_ptr<ICoreWebView2Controller2> controller2 = webviewController.query<ICoreWebView2Controller2>();
    controller2->put_DefaultBackgroundColor(wvColor);

@stefan1000 thanks, but can you please share a more complete code example? I'm trying a similar thing with OpenGL underneath and it doesn't seem to work: with .A = 0, the WebView just retains artifacts from the first moments the window rendered in the background (including the OpenGL canvas, but as the canvas draws new content, it doesn't appear). When I hide the WebView2, the OpenGL canvas renders properly.

@Ivshti let me check the source; my colleague who did the prototype is currently OOO.