w3c / webcodecs

WebCodecs is a flexible web API for encoding and decoding audio and video.

Home Page: https://w3c.github.io/webcodecs/

decodeQueueSize/encodeQueueSize is not updated on the task where dequeue event is fired

youennf opened this issue · comments

The usual pattern is to update a read-only attribute value and synchronously fire the corresponding event.
This is not the case for decoders/encoders, and I wonder why. Note this means that the timing at which the control messages run is somewhat observable to JS.

Also, now that we execute the error callback synchronously as part of the close encoder/decoder algorithm, the corresponding dequeue event is fired in a task after the error callback execution. This might be a bit surprising to applications.

I wonder whether we could consider something along those lines:

  • In case of reset/close with a non-AbortError error, fire the dequeue event synchronously so that it happens just before the error callback runs.
  • In case of an AbortError reset/closure, do not fire the dequeue event (the app triggered reset/close, so it does not really need a dequeue event).
  • In case of close being called by the web application between the time the dequeue event task is scheduled and the time the task runs, do not fire the dequeue event. The same approach could be useful for reset as well but might require introducing more states.
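The three proposals above could be written down as a single dispatch rule. A hedged sketch, where the function name and state fields are illustrative rather than spec text:

```javascript
// Hypothetical sketch of the proposed dequeue-event dispatch rules.
// All names are illustrative; this is not spec text.
function dequeueEventDisposition({ errorName, closedBeforeEventTaskRan }) {
  // App-triggered reset/close: the app does not need a dequeue event.
  if (errorName === "AbortError") return "skip";
  // Error-driven close: fire synchronously, just before the error callback.
  if (errorName !== undefined) return "fire-sync-before-error-callback";
  // close() raced with an already-scheduled dequeue event task: drop it.
  if (closedBeforeEventTaskRan) return "skip";
  // Normal operation: fire from the queued task.
  return "fire-from-task";
}
```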

I agree that it makes sense to make the observable value updates and the event emission atomic, which I think means snapshotting the queue size and updating the attribute to that snapshot when the event is emitted.
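One way to model that snapshotting is a toy class (not the real WebCodecs interface) where the observable attribute only changes at the moment the event fires:

```javascript
// Toy model (not the real WebCodecs interface) of snapshotting the queue
// size so the attribute update and the dequeue event are atomic from the
// point of view of JS.
class CodecModel {
  constructor() {
    this.decodeQueueSize = 0;      // JS-observable attribute
    this._internalSize = 0;        // control-thread view of the queue
    this._eventPending = false;    // coalesce multiple dequeues into one event
    this.ondequeue = null;
  }
  decode(_chunk) {
    this._internalSize++;
    this.decodeQueueSize++;
  }
  // Invoked when the codec finishes a chunk "in parallel".
  _chunkDequeued() {
    this._internalSize--;
    if (this._eventPending) return;
    this._eventPending = true;
    queueMicrotask(() => {
      this._eventPending = false;
      // Snapshot: the attribute only changes at the moment the event fires,
      // so JS never observes a size change without a corresponding event.
      this.decodeQueueSize = this._internalSize;
      if (this.ondequeue) this.ondequeue();
    });
  }
}
```

With this shape, two chunks dequeued back to back produce one event, and `decodeQueueSize` is unchanged until that event actually fires.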

What is the reason for firing the dequeue event in non-AbortError cases?

After a quick discussion about this, one issue to consider is that JS may be appending a new chunk concurrently with the decode process dequeuing a chunk. It's probably okay for the enqueue loop to terminate early, but synchronizing the adds/removes with event snapshots may be complicated.

I'll review earlier discussions to see if there was a specific reason the current design was selected.

The original discussion happened primarily in issue #494. It seems that having the queue size not increment in cases where the codec processes the chunk immediately was considered desirable, so that apps could append more immediately.

I'm no longer sure if this is the desired behavior. As noted in the thread, there are codecs that can synchronously process an unlimited number of chunks, and it may be better for apps to release control every so often in that case.

None of these considerations are affected in close cases, so I don't have any objections to those changes (although I would prefer to never emit a queue size change event once closing starts).

I would prefer to never emit a queue size change event once closing starts.

I do not really see any justification for it atm.
Let's start with this then. @aboba, @padenot, thoughts?

The original discussion happened primarily in issue #494. It seems that having the queue size not increment in cases where the codec processes the chunk immediately was considered desirable, so that apps could append more immediately.

I'm no longer sure if this is the desired behavior.

I'll try to read #494.
In general, I am fuzzy about saturated. For instance, how is the saturated check done from the control thread, given the spec says that any step referencing [codec implementation] should run via enqueuing in the codec queue?

Would the following sketch work in that new world?

  • Every time encode() is called, increment an internal enqueue-encode-task counter. Also increment the queue size.
  • Every time the task corresponding to encode() is processed (after the saturated check), increment an internal dequeue-encode-task counter. Schedule a task to do the following:
    • Compute the new queue size by subtracting the internal dequeue-encode-task counter from the internal enqueue-encode-task counter.
    • If the new queue size is negative, abort.
    • If the new queue size is below the current queue size attribute, update the attribute and fire the event synchronously.
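A literal transcription of that sketch, with all names hypothetical and the microtask standing in for the scheduled task:

```javascript
// Hypothetical transcription of the counter-based sketch above; names are
// illustrative, not from the spec.
class EncoderModel {
  constructor() {
    this.encodeQueueSize = 0;
    this._enqueueCount = 0;   // internal enqueue-encode-task counter
    this._dequeueCount = 0;   // internal dequeue-encode-task counter
    this.ondequeue = null;
  }
  encode(_chunk) {
    this._enqueueCount++;
    this.encodeQueueSize++;
  }
  // Runs when the task corresponding to encode() is processed
  // (after the saturated check).
  _encodeTaskProcessed() {
    this._dequeueCount++;
    queueMicrotask(() => {
      const newSize = this._enqueueCount - this._dequeueCount;
      if (newSize < 0) return;                // abort
      if (newSize < this.encodeQueueSize) {   // only ever shrink and fire
        this.encodeQueueSize = newSize;
        if (this.ondequeue) this.ondequeue();
      }
    });
  }
}
```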

I guess we could decrement synchronously if not saturated as part of this algorithm, if we deem it appropriate in some specific conditions. Something like: [[control message queue]] is empty at the time we do the saturated check and we are still in the encode method.

how is the saturated check done from the control thread

The parallel queue is not necessarily a thread.

As noted in #494, Chrome's implementation processes the work queue synchronously when work is enqueued by JS. This is observable in two ways:

  1. The queue size can be decremented while work is being enqueued. (E.g., decodeQueueSize doesn't change when calling decode(), or even becomes smaller.)
  2. Flush promises may be resolved synchronously while work is being enqueued (observable by messing with Object.prototype).
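The Object.prototype trick in (2) works because resolving a promise with a plain object synchronously looks up `then` on it, so a patched `Object.prototype.then` getter reveals on which task the resolution happened. A minimal demonstration, where `flush` is a stand-in for a codec's flush(), not the real method:

```javascript
// Demonstrates how patching Object.prototype exposes promise-resolution
// timing. `flush` is a stand-in that resolves synchronously, which the
// patched `then` getter makes observable.
let phase = "idle";
let observedPhase = null;
Object.defineProperty(Object.prototype, "then", {
  configurable: true,
  get() {
    observedPhase = phase;  // record when the `then` lookup happened
    return undefined;       // not a thenable, so resolution proceeds normally
  },
});

function flush() {
  // An implementation resolving the flush promise synchronously:
  return new Promise((resolve) => resolve({ done: true }));
}

phase = "inside flush() call";
flush();
phase = "after flush() returned";

// Clean up the global patch.
delete Object.prototype.then;
```

Because `resolve({...})` performs the `then` lookup synchronously, `observedPhase` ends up as "inside flush() call"; an implementation that resolved the promise from a later task would be caught with a different phase.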

(1) was considered desirable in #494, but I don't know how to reconcile it with both the current spec text and JS API norms. (2) can be fixed in Chrome and I consider it a quality-of-implementation issue (although I'm not actually sure whether it violates the rules of a 'parallel queue').

Also proposed in this thread is (3): queue size changes could be synchronously observable. In this case Chrome's implementation would probably still need to queue the events and apply a snapshotting system like the one you described to make it look as though the queue is asynchronous.