nodejs / node

Node.js JavaScript runtime ✨🐢🚀✨

Home Page: https://nodejs.org

zlib deflate results in a memory leak

jasonmcaffee opened this issue · comments

  • Version: 6.7.0
  • Platform: Darwin ITs-MacBook-Pro.local 15.6.0 Darwin Kernel Version 15.6.0: Thu Jun 23 18:25:34 PDT 2016; root:xnu-3248.60.10~1/RELEASE_X86_64 x86_64
  • Subsystem: zlib

I'm using the graylog2 package for logging, and we ran into significant memory leak issues.
After tracking it down, I found zlib.deflate is the source of the issue.
The issue is magnified when running code inside of docker with the latest node distribution.

Running the below code on my MacBook Pro results in memory spiking to ~3 GB, then being released back down to ~600 MB.
Running the code in the latest node docker distro results in memory spiking to ~3 GB, and it is never released.

let zlib = require('zlib');

let message = {
  some: "data"
};
let payload = Buffer.from(JSON.stringify(message));

for (var i = 0; i < 30000; ++i) {
  zlib.deflate(payload, function (err, buffer) {
  });
}

setTimeout(() => {}, 2000000);

This has resulted in our docker containers crashing due to memory exhaustion.

Are you sure the node version is the same between the two?

Yes. I've also recreated the issue on v4.5 and 5.12.

@jasonmcaffee have you experimented with the sync version zlib.deflateSync? I see the same thing with the async version but the sync version seems more tame

The loop is creating 30,000 concurrent zlib.deflate requests that are handed off to a threadpool; i.e., it's creating a giant backlog. They are dispatched eventually and that is why memory goes down again on OS X.

On Linux (or more precisely, with glibc), something else happens. When you run it through strace -cfe mmap,munmap,mprotect,brk you can see what is going on:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 99.63    3.803199          32    119985           brk
  0.20    0.007680          12       632           mmap
  0.12    0.004748           7       675           munmap
  0.05    0.001882          30        62           mprotect
------ ----------- ----------- --------- --------- ----------------
100.00    3.817509                121354           total

Node.js uses malloc/new and glibc translates those overwhelmingly to brk system calls. The problem is that the "program break" goes up but a few allocations with longer lifetimes make it impossible for the break to go down again. There is no real memory leak, it's ordinary - but rather catastrophic - memory fragmentation.

IOW, can confirm but node.js is not the root cause. Maybe we can sidestep it by using a custom allocator like jemalloc but that has a lot of ramifications, it's not something we can do lightly.
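For anyone who wants to reproduce the strace numbers above, an invocation along these lines should work (deflate-repro.js is just a placeholder name for wherever the snippet from the first comment is saved):

strace -cfe mmap,munmap,mprotect,brk node deflate-repro.js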

Thanks @bnoordhuis for the breakdown and explanation.

FWIW, we just fixed a problem that had similar symptoms, and our solution was to disable transparent huge pages. I'd encourage you to investigate and see if a) they are turned on, and b) if disabling them fixes your problem.
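For reference, checking the current setting and switching it off usually looks something like this on Linux (the sysfs path can vary between distributions):

cat /sys/kernel/mm/transparent_hugepage/enabled
echo madvise | sudo tee /sys/kernel/mm/transparent_hugepage/enabled   # or "never"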

Could this be an issue if multiple clients are triggering async deflate calls within a few ms of each other? Should people just avoid deflate() altogether?

Does this go much further than just deflate()? Could this be affecting most async calls which allocate memory on Linux?

Does anyone know which versions of node do not have this bug? We are seeing this on the latest Node 8 but also on all versions of 6 that I sampled.

I run the code snippet above in node invoked with --expose-gc and call global.gc() to free the memory, but 3 GB is still resident no matter what I do. This is killing us in production as we have to restart our containers every couple of hours.

@TylerBrock Did you read through this issue and the linked issues? If you did, then you already know the answer.

I did but I'm still not sure so I apologize if I missed something.

People have suggested it could be new zlib, new v8, or something in crypto (possibly related to zlib) but going back to versions of node having older zlib and v8 (checked via process.versions) yielded similar results.

Would you mind summarizing where we are at?

@TylerBrock

From #8871 (comment):

There is no real memory leak, it's ordinary - but rather catastrophic - memory fragmentation.

IOW, can confirm but node.js is not the root cause. Maybe we can sidestep it by using a custom allocator like jemalloc but that has a lot of ramifications, it's not something we can do lightly.

In other words, this is not a Node.js, or zlib, or V8 issue, but rather caused by how the system memory allocator works.

Disabling transparent huge pages (or setting it to madvise) may also help.

Must be something different on my end because THP is already set to madvise. Thanks for the clarification.

@TylerBrock Did you come to any solution? Disabling THP on our end gave us a little bit of headroom, but it's at the point where a machine blows up every day or so due to the use of permessage-deflate in a websocket lib. We'd rather not disable compression but are running out of viable options.

@STRML We just went back to node 6. Things run for months and months without leaking memory.

It's a bummer though, I'd love to use node 8. It's much faster, more memory efficient (when not leaking), and has async/await.

What I don't understand is: why is memory fragmentation an issue now? Did the allocation strategy or allocator change between 6 and 8?

Did the allocation strategy or allocator change between 6 and 8?

Not in Node – possibly in V8, but I don’t think it would have effects like this.

Fwiw, I couldn’t reproduce measurable differences in behaviour/memory usage using the original issue comment script here.

That's interesting @TylerBrock, I wouldn't have expected the version to matter. I see the same results in all versions I tested (>= 4).

I have noticed that deflateSync doesn't cause nearly the same problems, topping out at about 280MB vs 3GB for deflate(). It seems the application side could actually mitigate this problem somewhat by limiting concurrency on deflate().
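For anyone who wants to try that, here is a minimal sketch of what application-side concurrency limiting could look like (deflateLimited and the cap of 4 are made up for illustration, not an existing API):

const zlib = require('zlib');

// Cap the number of in-flight deflate() calls; 4 is an arbitrary choice.
const MAX_CONCURRENT = 4;
let active = 0;
const queue = [];

function deflateLimited(buf, callback) {
  if (active >= MAX_CONCURRENT) {
    queue.push([buf, callback]);
    return;
  }
  active++;
  zlib.deflate(buf, (err, result) => {
    active--;
    if (queue.length > 0) {
      const [nextBuf, nextCallback] = queue.shift();
      deflateLimited(nextBuf, nextCallback);
    }
    callback(err, result);
  });
}

// Same call shape as zlib.deflate(), but at most 4 deflates run at once.
deflateLimited(Buffer.from('hello'), (err, compressed) => {
  if (err) throw err;
  console.log(compressed.length, 'bytes');
});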

Has a solution been found for this problem?
We observe fragmentation when using zlib.deflate()

Ubuntu 16.04 / Node 9.4.0

@NicoBurno It's mentioned in the comments: disable THP.

@bnoordhuis That does not fix the issue in our testing. See also websockets/ws#1202 and #15301

If the issue is #8871 (comment), you can try tweaking glibc's malloc heuristics; see the Environment variables section in http://man7.org/linux/man-pages/man3/mallopt.3.html.
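For example, something along these lines (the values are just a starting point, not a recommendation; the man page explains what each variable does):

env MALLOC_ARENA_MAX=2 MALLOC_MMAP_THRESHOLD_=65536 node app.js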

This issue, specifically #8871 (comment), has been plaguing ws for a very long time to the point that yesterday, we received a suggestion to replace all async deflate calls with their sync equivalents.

Should we, at least, add a note in the documentation to make people aware of the memory fragmentation issue?

I'm not against but it's not exclusively a zlib issue; on the other hand, zlib might be the easiest way to hit it.

Any chance some of the people who are having issues here would be able to spare some cycles to benchmark/test what's in #21973? (I don't know what else would be a plausible path forward to solving/closing this.)

Any chance some of the people who are having issues here would be able to spare some cycles to benchmark/test what's in #21973? (I don't know what else would be a plausible path forward to solving/closing this.)

As far as this specific issue goes, migrating to jemalloc wouldn't help at all. The testcase 6 of #21973 (which is specifically the example in this issue) shows that jemalloc uses more memory and is slower still. No improvements whatsoever.

So … a few thoughts:

  • Looking into this, brotli doesn’t have this problem. It also consumes quite a bit of memory, but not in the same way that zlib does.
  • The ideal bug fix is for the caller to not create a ton of zlib instances in one go.
    • We could work with this on the Node.js side, and limit the number of concurrently active zlib instances (because the threadpool limits activity anyway)?
  • We could also work around this by extending the allocation algorithm for zlib. Since this is a platform-specific problem, maybe a platform-specific solution would be okay? (e.g. mmap()ing about the amount of memory that zlib needs instead of using malloc()?)

Fwiw, I’ve played around with a (naïve) mmap()-based allocator: https://gist.github.com/addaleax/2e3b6f83168e7d756340eb616c92e61f

The upside is that it fixes the memory fragmentation, the downside is that it slows down creating zlib objects by ~60 %, which seems a bit too much to me?

Nice work @addaleax. Have you tried benchmarking using the script in websockets/ws#1202 to see how it really performs vs. various concurrency levels?

@STRML It looks like it would remove the memory fragmentation issue there as well.

The upside is that it fixes the memory fragmentation, the downside is that it slows down creating zlib objects by ~60 %, which seems a bit too much to me?

FWIW, I've experimented with mmap/VirtualAlloc-based allocators for Node.js over the years and you're describing the same thing I kept seeing: that mmap() pretty much always loses out to brk().

You can shrink (but probably not close) the gap by mmap-ing a large reservation with PROT_NONE and then parceling it out with mprotect().

Having looked a bit more into this:

At least locally, the big overhead caused by mmap()-based allocation is coming from a) the munmap() calls (surprisingly, to me) and b) the page faults while first accessing the data.

(I’ve also tried to look into grouping the allocations performed for a zlib instance into a single mmap() call, but that requires looking into zlib’s internals and doesn’t give us all that much, because mmap() isn’t actually the expensive part here.)

a) is rather easily solved by outsourcing the calls to a background thread, but b) is trickier. So far, the best thing I’ve found is managing available pages through a freelist, but that has the obvious downside of keeping memory lying around until it is used.

So, if I want to dig deeper into this, it would help to have some more real-world data on how the problem manifests, and in particular what allocation patterns the zlib instances follow. I doubt that it’s actually 30k instances all created synchronously right after each other, as in the original example here? I assume it’s more of a “one zlib instance per connection” thing?

Thanks @addaleax for looking into this. The real-world scenario that caused this problem to manifest was that all log messages in our web service code ended up going through the zlib.deflate function call. We were using a library called graylog2, which used the deflate call before sending the log message to our graylog server. What we observed is that in our docker instances the memory was never released, so every few weeks we'd be forced to restart the instances. The example with 30k calls to zlib.deflate in a row just shows how to easily replicate the problem.
Hope that helps!

I assume it’s more of a “one zlib instance per connection” thing?

Yes, this is how the issue manifests in ws. There is one independent deflate stream per connection. A synchronous loop is then used to send some data to all connections.

Any chance someone with experience with this issue could put it into a little more context for JS lay-persons? This issue is referenced in the [very popular] ws module's README, so I expect quite a few folks who are JS-literate but not well versed in low-level Linux memory management are landing here. (I'm at least one such person 😄) It's clear from the commentary here that there's a theoretical issue at the 30,000-concurrent-request level, but it's unclear at what point this becomes a real-world concern.

Some specific questions:

  • What happens if a server is only making "a few" concurrent zlib.deflate() calls instead of the 30,000 in the SSCCE? (I.e. what's the correlation between concurrency level and degree of fragmentation?)
  • Why is this an issue with websockets but not with vanilla HTTP response compression?

I'm also seeing what I believe to be this issue simply when using a deflate stream to compress more than a few hundred MB of data.

System: Linux <hostname> 3.16.0-0.bpo.4-amd64 #1 SMP Debian 3.16.39-1+deb8u1~bpo70+1 (2017-02-24) x86_64 GNU/Linux
Node: v10.15.0 (interestingly, I was not seeing this behavior with v9.6.0 on the same machine, so something changed)
Can provide glibc version if needed.

Symptom: when piping an HTTP response stream to a deflate stream and then to an FS write stream, reported memory usage for the program grows as more and more data is received, leading to eventual killing by the kernel oom-killer after 100-200 MB of data has been received, compressed, and written. If I implement a simple Transform stream that either calls zlib.deflate(chunk, callback) or const result = zlib.deflateSync(chunk); callback(null, result);, I can observe memory usage growing with the deflate() version, whereas it is flat with the deflateSync() version. The growth is slower than in the built-in stream case because the naive implementation lacks any buffering and is thus slower, but the deflateSync() version processed over 1 GB of data with flat memory usage before I killed the test.
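For reference, the deflateSync() variant of that Transform is roughly this (a naive sketch with no buffering, so each chunk is compressed independently rather than as one continuous deflate stream):

const { Transform } = require('stream');
const zlib = require('zlib');

class DeflateSyncTransform extends Transform {
  _transform(chunk, encoding, callback) {
    // Compress each chunk synchronously on the event loop thread.
    callback(null, zlib.deflateSync(chunk));
  }
}

// Usage: response.pipe(new DeflateSyncTransform()).pipe(fileWriteStream);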

Yes, there's a warning that hints at this at the top of the zlib docs page now, but it's not nearly strongly worded enough, IMO. This library is not at all suitable for serious production usage for anyone processing any sort of reasonably sized data, and the docs need to more clearly outline the scenarios that can lead to the catastrophic memory results (long-running apps with many deflates, or streaming large amounts of data through).

Using a threadpool internally to get around the lack of multi-threading and allow presenting CPU-bound synchronous APIs as async is a reasonable design choice, but the current implementation clearly has issues with the way it uses memory. Given the way node responds to code hogging the event loop when it's under load (random network timeouts etc) I suspect that just using deflateSync() in a transform stream isn't going to work super well and instead I'm going to have to offload this processing to a separate worker process, wrapping the details in a transform stream.

FWIW, with worker threads in Node 12, to work around this, it really makes sense from a userland point of view to create a small threadpool for this and delegate to it, having the threadpool run deflateSync() internally but presenting an async Promise-based API to the caller. It would be a pretty simple thing to write a library for and instead require('zlib-threaded').
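A bare-bones sketch of that idea, using a single worker rather than a real pool (the file layout and names here are made up for illustration):

const { Worker, isMainThread, parentPort } = require('worker_threads');
const zlib = require('zlib');

if (isMainThread) {
  // Main thread: hide the worker behind a Promise-based deflate().
  const worker = new Worker(__filename);
  const pending = [];
  // A single worker answers requests in order, so a FIFO queue of resolvers is enough.
  worker.on('message', (result) => pending.shift()(Buffer.from(result)));

  const deflateAsync = (buf) =>
    new Promise((resolve) => {
      pending.push(resolve);
      worker.postMessage(buf);
    });

  deflateAsync(Buffer.from(JSON.stringify({ some: 'data' }))).then((out) => {
    console.log('deflated to', out.length, 'bytes');
    return worker.terminate();
  });
} else {
  // Worker thread: run the CPU-bound work synchronously, off the main event loop.
  parentPort.on('message', (buf) => {
    parentPort.postMessage(zlib.deflateSync(Buffer.from(buf)));
  });
}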

Bump on this issue, it's Oct. of '19 (zeitgeist: node v13.0.1 / Centos kernel v4.18). Is there any work on the worker thread implementation, or any other WIP?

This problem is hurting my game. Memory stays stable at around 700 MB all day, except for roughly three times a day when it suddenly starts rising to about 3 GB and the CPUs go full tilt, making the server unresponsive for some minutes; then everything goes back to normal as if nothing happened, except that messages are lost and ongoing games are affected. I have been hunting for a solution for the past two months and have literally given up. If someone is able to help me out, I'm ready to pay, of course.

@Senglean Can you explain how you established it's this specific issue and not something else?

Switching to jemalloc might fix it. apt-get install libjemalloc1 and then start node like this:

env LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 node app.js

If you spawn child processes, make sure they also inherit that LD_PRELOAD environment variable.

@bnoordhuis thanks for your feedback! I will follow your instructions and let you know.

I came to this conclusion from heap dumps.

For the record, it happened again. The server was running well, with constant low CPU usage and memory never exceeding 700 MB. Then it started to go nuts for a few seconds and went back down to 90 MB. This is a video of the event:

https://www.youtube.com/watch?v=gMctGII26eY&feature=youtu.be

This time the node server crashed with this error:

BadRequestError: request aborted
    at IncomingMessage.onAborted (node_modules/raw-body/index.js:231:10)
    at IncomingMessage.emit (events.js:315:20)
    at abortIncoming (_http_server.js:532:9)
    at socketOnClose (_http_server.js:525:3)
    at TLSSocket.emit (events.js:327:22)
    at net.js:674:12
    at Socket.done (_tls_wrap.js:574:7)
    at Object.onceWrapper (events.js:422:26)
    at Socket.emit (events.js:315:20)
    at TCP.<anonymous> (net.js:674:12)

Usually it didn't crash. I also noticed that I can turn the Node.js process off, and after turning it back on the behaviour continues.

htop shows the Node.js process on top, causing this. Can anyone give a tip on where to go from here, or say whether this behaviour has been observed before? Thanks.

In the heap dumps I can see an increase in the following:
[screenshot of heap dump, 2020-06-05 10:03 PM]

@Senglean The memory fragmentation that's being discussed in this issue is not something you'd see reflected in heap dumps. You probably have an ordinary memory leak (i.e., resource leak) somewhere in your application. Try opening an issue over https://github.com/nodejs/help/issues.

Thank you I will!

FWIW, taking heap snapshots can create RSS spikes very similar to what's being discussed in this issue. See #33790 for more details (and there too switching to jemalloc is a good workaround.)

If I may ask, is this the correct way to start pm2 with jemalloc? Because unfortunately it didn't make a difference:

LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 NODE_ENV=production pm2 start server.js

I appreciate it.

I wonder if an optional output Buffer argument would be of any help with this issue? Theoretically, it would give users more control over the off-heap allocation rate. I'm talking of something like this:

// if the `output` is large enough, it's used;
// otherwise, a new Buffer is allocated
zlib.deflate(payload, (err, buffer, bytesWritten) => {}, output);

@puzpuzpuz I might be wrong, but I don’t think the Buffers are the issue here as much as it are the zlib internals.

(This is judging from my earlier comment above: #8871 (comment) – we can fix the memory fragmentation problem by making zlib allocate its memory through mmap(). The patch there does not affect how we allocate output buffers.)

@addaleax thanks. That's interesting. I'll check the patch.

@addaleax you were right. The issue is related to the memory allocated per Deflate instance, rather than to Buffers. It can be seen in the following script:

const zlib = require('zlib');

const deflates = [];
for (var i = 0; i < 30000; ++i) {
  deflates.push(zlib.createDeflate());
}

setTimeout(() => { }, 2000000);

This script allocates around 2.5GB RSS on my machine. On the other hand, if only a single stream is used, the RSS is around 42MB:

const zlib = require('zlib');

const message = {
  some: "data"
};
const payload = Buffer.from(JSON.stringify(message));

const deflateStream = zlib.createDeflate();
deflateStream.on('data', console.log);
deflateStream.on('finish', () => console.log('finished'));

for (var i = 0; i < 30000; ++i) {
  deflateStream.write(payload);
}
deflateStream.end();

setTimeout(() => { }, 2000000);

Did a small experiment with lazy initialization of zlib stream and it seems to help with keeping the RSS around 200MB instead of 2.7GB on the original script. The code is as ugly (and buggy) as it can be, but hopefully the idea is clear enough: puzpuzpuz@b4be58d

I'm not sure if that approach makes any sense, but maybe it could help to lower the footprint in scenarios with a lot of implicit zlib stream allocations, like the zlib.deflate() calls in the original script.

@puzpuzpuz It’s certainly worth experimenting with – does it fix the issue here for you? If yes, I’d say go for a PR :)

@addaleax yes, it seems to mitigate the issue (see #8871 (comment)). According to my experiment, each zlib instance occupies around 87KB when the original script is run. I'll try to polish the patch and submit a fix PR.

I've submitted #34048 which implements lazy zlib initialization, when possible. Behavior of the fix for the original script is described here: #8871 (comment)

The difference with the latest master may be clearly seen on this variation of the script:

const zlib = require('zlib');

const message = {
  some: "data"
};
const payload = Buffer.from(JSON.stringify(message));

setInterval(() => {
  for (let i = 0; i < 30000; ++i) {
    zlib.deflate(payload, () => { });
  }
}, 2000);

With master I get 16GB RSS after about 1 min of execution and it keeps slowly growing. With the fix RSS fluctuates, but never goes beyond 4GB. OS info: Linux apechkurov-laptop 4.15.0-108-generic #109-Ubuntu SMP Fri Jun 19 11:33:10 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

Any feedback, testing on other platforms and code reviews are appreciated.

Closing this one as #34048 has landed. Please let me know if you still face the same behavior, so I could reopen the issue.

@puzpuzpuz It seems that #34048 only mitigates the issue but does not fix it. Am I right?

I'm using this test on Ubuntu 18.04

'use strict';

const util = require('util');
const zlib = require('zlib');

const deflate = util.promisify(zlib.deflate);
const payload = Buffer.from(JSON.stringify({ some: 'data' }));

async function testLeak() {
  const promises = [];

  for (let i = 0; i < 30000; i++) {
    promises.push(deflate(payload));
  }

  await Promise.all(promises);
}

async function main() {
  console.time('testLeak');

  for (let i = 0; i < 10; i++) {
    await testLeak();
  }

  console.timeEnd('testLeak');
}

main()
  .then(function () {
    setTimeout(function () {
      console.log('Memory:', process.memoryUsage());
    }, 10000);
  })
  .catch(console.error);

and the memory usage grows linearly with the number of times testLeak() is run.

@puzpuzpuz It seems that #34048 only mitigates the issue but does not fix it. Am I right?

@lpinca That may be true, as I did some experiments with the fix applied to the reproducer script, but would appreciate any feedback from the community. I'm going to reopen this issue, if necessary, once we get some feedback. Feel free to reopen it now, if you think that's necessary.

The fix seems to be decreasing the fragmentation, but it doesn't get rid of it completely. So, probably it's correct to say that it "mitigates" the issue, rather than "fixes" it.

I've also tried to run your snippet on the latest master:

$ ./node zlib.js 
testLeak: 15.255s
Memory: {
  rss: 540753920,
  heapTotal: 12722176,
  heapUsed: 2812944,
  external: 495556,
  arrayBuffers: 17590
}

And on b0b52b2 (which doesn't include the fix):

$ ./node zlib.js 
testLeak: 19.915s
Memory: {
  rss: 8469770240,
  heapTotal: 4333568,
  heapUsed: 2677264,
  external: 495098,
  arrayBuffers: 17590
}

The RSS is noticeably lower with the fix, yet it's still relatively large when compared with the heap size.

Yes, I get similar results on Linux.

With the mitigation patch

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 93.15   54.458683          37   1466699           mprotect
  5.55    3.242945         303     10698           munmap
  1.10    0.641740         113      5679           mmap
  0.20    0.119330          27      4353           brk
------ ----------- ----------- --------- --------- ----------------
100.00   58.462698               1487429           total

Without the mitigation patch

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 61.95    0.636388          11     59694           brk
 20.25    0.208015          27      7767           munmap
 16.07    0.165120           8     20336           mprotect
  1.73    0.017728           5      3915           mmap
------ ----------- ----------- --------- --------- ----------------
100.00    1.027251                 91712           total

On Windows the memory usage is stable at ~70 MB with or without the mitigation fix. I should try with jemalloc on Linux. I think it makes sense to reopen.

Yes, I get similar results on Linux.

I was also using Ubuntu 18.04 with glibc 2.27.

I think it makes sense to reopen.

OK, reopening it then.

I should try with jemalloc on Linux.

@lpinca I've tried that with jemalloc 3.6.0-11 and here is the output:

$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 ./node zlib.js
testLeak: 13.778s
Memory: {
  rss: 334114816,
  heapTotal: 131715072,
  heapUsed: 103527424,
  external: 472453668,
  arrayBuffers: 471975702
}

Try using - zlib-bug-inflate

@puzpuzpuz

That's without the mitigation patch right? I also tried with jemalloc and there is an order of magnitude difference (~40 MB with the patch and ~400 MB without it).

@lpinca no, that's with the patch. I may be doing something wrong, as I don't see an order of magnitude difference.

Nvm it seems it was me that did something wrong:

luigi@ubuntu:~/leak$ node -v
v14.7.0
luigi@ubuntu:~/leak$ env LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 node index.js
testLeak: 57.830s
Memory: {
  rss: 46489600,
  heapTotal: 4251648,
  heapUsed: 2202536,
  external: 998285,
  arrayBuffers: 17590
}
luigi@ubuntu:~/leak$ nvm use 14.6.0
Now using node v14.6.0 (npm v6.14.6)
luigi@ubuntu:~/leak$ node -v
v14.6.0
luigi@ubuntu:~/leak$ env LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 node index.js
testLeak: 1:24.327 (m:ss.mmm)
Memory: {
  rss: 49336320,
  heapTotal: 3989504,
  heapUsed: 2189736,
  external: 997652,
  arrayBuffers: 17590
}

It seems jemalloc is a working workaround.

It seems jemalloc is a working workaround.

@lpinca you probably mean that jemalloc works more or less like #34048 when used with pre-14.7.0 versions of node, right? At least, that's how it looks.

Better than that, I mean that with jemalloc there is no leak at all.

luigi@ubuntu:~/leak$ node -v
v14.7.0
luigi@ubuntu:~/leak$ node index.js
testLeak: 1:11.205 (m:ss.mmm)
Memory: {
  rss: 700317696,
  heapTotal: 132943872,
  heapUsed: 101497112,
  external: 493479181,
  arrayBuffers: 492498486
}
luigi@ubuntu:~/leak$ env LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 node index.js
testLeak: 57.279s
Memory: {
  rss: 49631232,
  heapTotal: 9494528,
  heapUsed: 2323328,
  external: 998285,
  arrayBuffers: 17590
}
luigi@ubuntu:~/leak$ nvm use 14.6
Now using node v14.6.0 (npm v6.14.6)
luigi@ubuntu:~/leak$ node -v
v14.6.0
$ env LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 node index.js
testLeak: 1:24.564 (m:ss.mmm)
Memory: {
  rss: 51523584,
  heapTotal: 4513792,
  heapUsed: 2273040,
  external: 997652,
  arrayBuffers: 17590
}

Better than that, I mean that with jemalloc there is no leak at all.

Strictly speaking memory fragmentation is not a leak and the allocator should be able to get rid of it when the allocation rate gets lower and fewer allocated memory chunks remain. If jemalloc doesn't have a noticeable fragmentation in the discussed scenario, that's great.

As a summary, it would be great to get some feedback from the community on v14.7.0 (which includes #34048) and v14.7.0 (or any previous version) with jemalloc.

Yes "leak" is not the correct term in this case. My point is that with jemalloc memory usage remains pretty much stable regardless of how many times the testLeak() function in the above example is run. This is not true with glibc even with #34048 applied.

How does this look in 2023 with Node 18+? There have been no comments on this issue for 3 years, so I'm wondering if it's safe to use permessage-deflate to compress websockets; I was pointed to this issue.

I've run the test from #8871 (comment)
on my machine with the following results:

  • Win 10
  • Ryzen 7 1800X
  • 32GB RAM

testLeak: 4:14.889 (m:ss.mmm)
Memory: {
  rss: 95129600,
  heapTotal: 5390336,
  heapUsed: 3328040,
  external: 1524293,
  arrayBuffers: 18659
}

During the test the process memory usage consistently oscillated between around 600 MB and 6.4 GB.