video-dev / hls.js

HLS.js is a JavaScript library that plays HLS in browsers with support for MSE.

Home Page: https://hlsjs.video-dev.org/demo


Drawing frame images on a canvas

yzydeveloper opened this issue · comments

Is your feature request related to a problem? Please describe.

None

Describe the solution you'd like

hls.js uses MediaSource combined with a video element to display video, but is there a way to obtain the YUV data for each frame, and then play the video using Canvas and AudioContext?

Additional context

No response

HLS.js does not provide methods for interacting with HTMLMediaElement that are already available as part of the Web API.
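For example, the usual pattern is to attach hls.js to a `<video>` element and then drive everything else through standard Web APIs, such as painting presented frames to a canvas with `requestVideoFrameCallback` where supported. A minimal sketch (the stream URL is a placeholder):

```js
import Hls from 'hls.js';

const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource('https://example.com/stream.m3u8'); // placeholder URL
  hls.attachMedia(video); // hls.js feeds the <video> via MSE
}

// Everything past this point is the plain Web API, not hls.js:
// paint each presented frame onto the canvas.
function paint() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  video.requestVideoFrameCallback(paint);
}
video.requestVideoFrameCallback(paint);
```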

> HLS.js does not provide methods for interacting with HTMLMediaElement that are already available as part of the Web API.

Is there any other way for hls.js to play on browsers that do not support MSE? For example, using Canvas and AudioContext.

> Is there any other way for hls.js to play on browsers that do not support MSE?

HLS.js only uses MSE.

Drawing a VideoFrame via canvas plus audio frames (PCM data) via WebAudio is not directly related to hls.js (or any other MSE-based streaming library), and it is achievable in various ways depending on your actual needs. The only caveat is that you only get YUV data once a frame has been rendered, which makes this suitable for post-processing.

  1. The WebCodecs API provides a way to get the currently rendered VideoFrame directly from the video element (`const frame = new VideoFrame(videoElement)`); you can then use `VideoFrame.copyTo()` with `VideoFrame.format` and `VideoFrame.allocationSize()` to get YUV data for most content (normal 8-bit 4:2:0 content should be fine). See the first sketch after this list.

  2. Building on (1), WebGPU also allows you to import the VideoFrame directly as a texture; you can then use a simple matrix to transform RGB back to YUV for custom processing in a shader, or render the texture straight into a canvas. For a normal 2D canvas, you should also be able to draw an ImageBitmap directly onto the canvas.

  3. For audio data, the ScriptProcessorNode or AudioWorklet of the Web Audio API should be enough; the data provider can be the HTMLVideoElement itself. See the second sketch after this list.
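A minimal sketch of point (1), assuming a browser with WebCodecs support and a playing `<video>` element (`grabYUV` is just an illustrative helper name):

```js
async function grabYUV(video) {
  // Wrap the frame currently displayed by the <video> element.
  const frame = new VideoFrame(video);
  try {
    const size = frame.allocationSize();      // bytes needed by copyTo()
    const data = new Uint8Array(size);
    const layout = await frame.copyTo(data);  // per-plane { offset, stride }
    // frame.format is e.g. 'I420' for typical 8-bit 4:2:0 content.
    return { data, layout, format: frame.format,
             width: frame.codedWidth, height: frame.codedHeight };
  } finally {
    frame.close(); // release the underlying decoder resource promptly
  }
}
```

And a sketch of point (3), assuming AudioWorklet support; the inline Blob module only keeps the example self-contained, and `PcmTap` / `'pcm-tap'` are made-up names:

```js
// AudioWorklet processors normally live in their own file; a Blob URL
// keeps this sketch self-contained.
const workletSource = `
  class PcmTap extends AudioWorkletProcessor {
    process(inputs) {
      const channel0 = inputs[0][0];
      if (channel0) this.port.postMessage(channel0.slice()); // 128 Float32 samples
      return true; // keep the processor alive
    }
  }
  registerProcessor('pcm-tap', PcmTap);
`;

async function tapAudio(video) {
  const audioCtx = new AudioContext();
  const moduleUrl = URL.createObjectURL(
    new Blob([workletSource], { type: 'application/javascript' }));
  await audioCtx.audioWorklet.addModule(moduleUrl);

  const source = audioCtx.createMediaElementSource(video); // the <video> is the provider
  const tap = new AudioWorkletNode(audioCtx, 'pcm-tap');
  tap.port.onmessage = (e) => {
    // e.data is a Float32Array of raw PCM samples; process or buffer it here.
  };
  source.connect(tap);
  source.connect(audioCtx.destination); // keep normal audible playback
}
```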

Can I obtain a buffer for processing during hls.js decoding?

> Can I obtain a buffer for processing during hls.js decoding?

First of all, hls.js and similar libraries do not provide "decoding" functionality; video decoding is not normally exposed to the JS context. MSE, on the other hand, is not a standalone "decoder" either: you can think of it as a source provider that lets web developers customize the way media data is "streamed" to the browser.

From your previous description, I think you are on the wrong track. If the device/OS does not support MSE, it likely does not support any of the newer decoding/rendering APIs either, so relying on native HLS support is probably your only choice.

And if you really just want to control the decoding/rendering process:

simple solution: no

complex solution: yes, you can build a custom MSE and a custom HTMLVideoElement using WebCodecs or even WASM; as long as you follow the MSE spec, you can then modify hls.js to use your custom modules. In that case hls.js will push remuxed fMP4 segments to your MSE interface, and you can do your work after that (e.g. use WebCodecs for decoding, output YUV data, and manage your own frame buffer before sending frames to the canvas). A rough sketch of that decode stage follows.
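A rough sketch of what the decode stage of such a custom pipeline could look like; `avcCBox` and `sample` are placeholders for data you would obtain by demuxing the fMP4 segments yourself (e.g. with mp4box.js), and the codec string is just an example:

```js
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

const decoder = new VideoDecoder({
  output(frame) {
    // A real player would queue frames and present them on its own clock;
    // here we just paint each decoded frame immediately.
    ctx.drawImage(frame, 0, 0, canvas.width, canvas.height);
    frame.close();
  },
  error(e) { console.error('decode error:', e); },
});

// Example codec string; take the real one (and the avcC box bytes)
// from the demuxed init segment.
decoder.configure({ codec: 'avc1.64001f', description: avcCBox });

// For each demuxed sample from the fMP4 segment:
decoder.decode(new EncodedVideoChunk({
  type: sample.isKeyframe ? 'key' : 'delta',
  timestamp: sample.pts, // microseconds
  data: sample.bytes,
}));
```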