Error: premature close
rlidwka opened this issue · comments
I have some code that worked before, but now throws an error after the release of end-of-stream@1.4.2:
const pump = require('pump')
const through2 = require('through2')
const multistream = require('multistream')
const from2 = require('from2')

function MyStream () {
  return through2({ writableObjectMode: true }, function write (chunk, encoding, callback) {
    setTimeout(function () {
      console.log(chunk)
      callback()
    }, 10)
  }, function finish (callback) {
    callback()
  })
}

let buffer = []
for (let i = 0; i < 100; i++) buffer.push({ id: i })

let ms = multistream.obj([ from2.obj(buffer) ])
//let ms = from2.obj(buffer) // this works

pump(ms, MyStream(), console.log)
//ms.pipe(MyStream()) // this also works
Outputs:
...
{ id: 72 }
{ id: 73 }
{ id: 74 }
{ id: 75 }
{ id: 76 }
{ id: 77 }
{ id: 78 }
{ id: 79 }
Error: premature close
at onclosenexttick (/tmp/node_modules/end-of-stream/index.js:53:86)
at processTicksAndRejections (internal/process/task_queues.js:75:11)
{ id: 80 }
{ id: 81 }
...
I've narrowed it down to 25b2425. I'm not sure what that code does, maybe it just exposes a bug that was there before.
I use pump, which depends on end-of-stream; the dependency silently upgraded to the latest patch version, which is what triggered this.
Using node@12.6.0; dependencies are as follows (note that this example is also affected by feross/multistream#51, whose fix is not yet published to npm, so I'm using the git version):
yarn add pump@3.0.0 through2@3.0.1 from2@2.3.0 https://github.com/feross/multistream.git#21d96319
Need some help figuring out why this happens, as I'm not very familiar with streams and don't know what their correct behavior is.
Try upgrading to the latest version of through2. (Thanks, your message helped me diagnose a similar issue.)
Nevermind. Still seeing the same errors.
In my case another error was occurring, but it got swallowed up and manifested as the "premature close".
Facing the same issue (I think, or something close to it).
As per version 1.4.4:
Line 53 in e104395
The issue goes away for me by removing && !rs.destroyed there, which apparently indicates I am prematurely destroying the associated stream. Now the question is whether it's correct for EOS to signal a premature EOF while data is still pending / being processed (though it's probably safe to assume this is an issue on our side, given who the author is :))