ricea / compressstream-explainer

Compression Streams Explained


Controlling flushing

tyoshino opened this issue

It depends on the internal state and the input, but when a new chunk is written to a CompressStream, the CompressStream has two choices: buffer some input bytes until it can generate a full compressed byte, or flush the buffered data (e.g. by performing zlib's "sync flush").
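For reference, here is roughly what a zlib sync flush looks like outside the web API, using Node's zlib bindings. This is only an illustration of the flush behaviour discussed above, not part of the CompressStream proposal:

```ts
import { createDeflate, constants } from 'node:zlib';

const deflate = createDeflate();
deflate.on('data', (chunk: Buffer) => {
  // Compressed bytes only appear here once the compressor emits them.
  console.log('compressed bytes:', chunk.length);
});

deflate.write(Buffer.from('{"a":1}\n'));
// Without a flush, the bytes above may sit in zlib's internal buffer.
// Z_SYNC_FLUSH forces out everything written so far, aligned to a byte
// boundary, so a decompressor on the other side can decode it immediately.
deflate.flush(constants.Z_SYNC_FLUSH);
```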

Let's suppose we want to stream a sequence of JSON objects through a CompressStream to a DecompressStream (or some non-web decompressor) at the other peer, and that there could be some latency between objects. It would be good if the receiving side could start processing each new chunk as soon as possible. However, without an API to instruct the stream whether or not to flush, the CompressStream needs to decide by itself (see the code sketch after the list below).

  • Flushing after every chunk would be inefficient, but might be acceptable. Not sure.
  • Never flushing would add latency. Some chunks would be held in the CompressStream for a while, until the next chunk arrives and pushes the buffered data out.
  • Flushing after a timeout. But what is a reasonable timeout?
  • Queuing a microtask to flush once at least one chunk has been written might be an option.
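To make the scenario concrete, here is a minimal sketch using the CompressionStream API that eventually shipped (the explainer's CompressStream would be used the same way). `sendToPeer` is a hypothetical network sink; whether the receiver sees each object promptly depends on which of the flushing policies above the implementation picks:

```ts
// Hypothetical sink for the compressed bytes (e.g. a fetch upload or socket).
declare function sendToPeer(r: ReadableStream<Uint8Array>): Promise<void>;

const encoder = new TextEncoder();
const compressed = new CompressionStream('gzip');
void sendToPeer(compressed.readable);

const writer = compressed.writable.getWriter();

async function streamEvents(events: AsyncIterable<object>) {
  for await (const event of events) {
    // Each JSON object is written as its own chunk. Without flush control,
    // these bytes may be buffered inside the compressor until more input
    // arrives, which is the latency problem described in this issue.
    await writer.write(encoder.encode(JSON.stringify(event) + '\n'));
  }
  await writer.close(); // closing performs the final flush
}
```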

In terms of efficiency, the cost might be negligible. From a compatibility point of view, it may be worth investigating.

If this kind of usage is simply out of scope, that's OK. Never mind :) In that case, I suggest discussing it in the explainer or somewhere more appropriate.

I had overlooked this point. I was only thinking about getting maximum compression, not minimising latency.

I am thinking that adding an option to flush after each chunk might be the simplest way forward. I don't plan to support it in the first version, but I will mention it in the explainer.
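Purely as a sketch of what such an option might look like (note: no flush option shipped in the first version of the API, and the option name here is invented for illustration):

```ts
// Hypothetical constructor option; not part of any shipped API.
declare class CompressStream {
  constructor(format: 'gzip' | 'deflate', options?: { flush?: 'after-each-chunk' });
  readonly readable: ReadableStream<Uint8Array>;
  readonly writable: WritableStream<Uint8Array>;
}

// With such an option, every write() would end in a sync flush, so the
// receiver could decode each chunk as soon as it arrives, trading some
// compression ratio for lower latency.
const cs = new CompressStream('deflate', { flush: 'after-each-chunk' });
```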

Nice to see you again!

Yeah, having such an option sounds good.

I added some mention of this issue in #4. PTAL.

It's been a while, Hirano-san, Adam-san!

Sounds reasonable not to work on this for the first version. The change LGTM.

Feel free to close this issue for now, or leave it open to track the discussion. Up to you. It was just a drive-by FYI comment, not one from a stakeholder :)