jetty / jetty.project

Eclipse Jetty® - Web Container & Clients - supports HTTP/2, HTTP/1.1, HTTP/1.0, websocket, servlets, and more

Home Page: https://eclipse.dev/jetty

Memory leak in `ArrayRetainableByteBufferPool$RetainedBucket`

KGOH opened this issue

Jetty version(s)
11.0.17

Java version/vendor
openjdk version "21.0.3" 2024-04-16 LTS
OpenJDK Runtime Environment Temurin-21.0.3+9 (build 21.0.3+9-LTS)
OpenJDK 64-Bit Server VM Temurin-21.0.3+9 (build 21.0.3+9-LTS, mixed mode, sharing)

OS type/version
Inside an `eclipse-temurin:21-alpine` Docker container

Description
On several (but not all) instances of my application that use the same functionality, I observe constantly growing direct memory usage:
[screenshot: direct memory usage over time; the saw-tooth pattern is due to manual application restarts]

I created a heap dump using `jmap -dump:format=b,file=heap.hprof $PID`.
I inspected it with VisualVM and Eclipse MAT, and both point to `org.eclipse.jetty.io.ArrayRetainableByteBufferPool$RetainedBucket`.

Here are screenshots from VisualVM and Eclipse MAT from the dump inspection:
[screenshots: VisualVM and Eclipse MAT views of the heap dump]

How to reproduce?
🤷

Memory leaks are notoriously hard to track down, and sometimes hard to differentiate from normal (but large) memory consumption. In your MAT screenshot, I can see that the retained bucket has almost 1.3 million buffer entries in it, so I'm fairly positive that there is indeed some form of leak somewhere.

Unfortunately, almost everything in Jetty works with buffers, so just knowing that something, somewhere, is not always releasing them isn't nearly enough information to track down the bug. We are going to need your help to narrow down the areas that may contain the leaky code.

The key to identifying what's causing the leak is to find some correlation between certain request types and the growth in memory usage, and then to try to reproduce the leak with a much smaller sample app.

Here's a list of places to start looking for correlations (a sketch for watching the buffer pool's state while you exercise each area follows the list):

  • Are you using HTTP/2, or HTTP/1.1 only? Are you performing upgrades from HTTP/1.1 to HTTP/2?
  • Are you using websockets?
  • Are you using SSL?
  • Have you configured Jetty to perform GZIP compression?
  • Are you using the Jetty client (maybe via the proxy servlet)?
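
One way to hunt for these correlations is to periodically log the buffer pool's state while you exercise each of the areas above in isolation. Below is a minimal sketch, assuming embedded Jetty and that you hold a reference to the `Server`; it relies on `Server.dump()`, which includes the byte buffer pool's buckets in the component tree, though the exact strings to filter on may differ in your version.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.eclipse.jetty.server.Server;

public class BufferPoolWatcher
{
    // Periodically print the lines of the server's component dump that mention the
    // buffer pool, so that growth can be matched against the traffic sent at that time.
    public static ScheduledExecutorService watch(Server server)
    {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() ->
            server.dump().lines()
                .filter(line -> line.contains("ByteBufferPool") || line.contains("Bucket"))
                .forEach(System.out::println),
            1, 1, TimeUnit.MINUTES);
        return scheduler;
    }
}
```

If the bucket counts climb only when, say, websocket or proxied traffic is flowing, that's exactly the kind of correlation that would let us reproduce the leak.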

In the meantime, you can try limiting the maximum amount of memory retained by the buffer pool by configuring the `maxBucketSize`, `maxHeapMemory` and `maxDirectMemory` of the `ArrayByteBufferPool`. This could affect performance, but it should cap the memory Jetty uses for its buffers.
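
If you run Jetty embedded, here's a rough sketch of what that could look like. The six-argument `ArrayByteBufferPool` constructor and the values below are assumptions for illustration (the limits are in bytes, and the fourth argument is the per-bucket size limit); please double-check against the javadoc of your exact 11.0.x version. If you deploy with jetty-home instead, the same limits should be exposed as properties of the `bytebufferpool` module.

```java
import org.eclipse.jetty.io.ArrayByteBufferPool;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class CappedBufferPoolServer
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server();

        // minCapacity=0, factor=1024, maxCapacity=64 KiB control bucket sizing;
        // the fourth argument caps how many buffers each bucket may retain;
        // the last two arguments cap the heap and direct memory the pool may retain.
        ArrayByteBufferPool bufferPool = new ArrayByteBufferPool(
            0, 1024, 64 * 1024, 512,
            64 * 1024 * 1024,   // max heap memory retained by the pool
            64 * 1024 * 1024);  // max direct memory retained by the pool

        // Pass the pool to the connector explicitly so it is used instead of the default one.
        ServerConnector connector = new ServerConnector(
            server, null, null, bufferPool, -1, -1, new HttpConnectionFactory());
        connector.setPort(8080);
        server.addConnector(connector);

        server.start();
        server.join();
    }
}
```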

Finally, another thing worth mentioning: Jetty 12 is much more robust in the face of buffer leaks; among other improvements, it contains code that detects and repairs them most of the time, so you may want to give it a shot if you can.

Hello, sorry for the silence. I've been investigating another memory leak that has been occurring in Caddy, which proxies my Jetty server. I've replaced Caddy with nginx, and the memory consumption rate has changed significantly. I'll keep observing for another couple of weeks.

If you have any idea how Caddy could cause the memory leak in Jetty, let me know.

In the meantime, here's the related issue thread in the Caddy repo:
caddyserver/caddy#6322 (comment)

Caddy must be doing something slightly different from nginx that triggers the leak in Jetty, but it's hard to be more precise than that.

Something else that may help: we have backported the leak-tracking buffer pool from Jetty 12, and it is going to land in the soon-to-be-released Jetty 11.0.23. Once that release is out, you may want to configure the leak-tracking pool, wait until you can confirm that memory consumption has reached an abnormal level, and then take a heap dump.

The tracking pool should then contain some fairly precise information about the root cause of the leak that should help us track it down.