Per-message deflate cannot be disabled
jhurliman opened this issue
Description
The C++ websocket servers (TLS and non-TLS variants) are configured with per-message deflate, which adds CPU overhead on the server and clients without any bandwidth improvement if the socket is mostly transmitting compressed images. This is a common configuration for robots with one or more cameras.
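The bandwidth point is easy to demonstrate: deflate gains little or nothing on data that is already compressed (such as JPEG camera frames) while still spending CPU on every message. A minimal sketch using Python's standard zlib module, with random bytes standing in for an already-compressed image payload (an illustrative stand-in, not actual camera data):

```python
import os
import zlib

# Random bytes are statistically similar to an already-compressed
# image payload: there is almost no redundancy left to remove.
payload = os.urandom(256 * 1024)

deflated = zlib.compress(payload, 6)

# Deflate cannot shrink incompressible input; the output is the input
# wrapped in "stored" blocks plus header/checksum overhead, so it is
# slightly LARGER than the original.
print(f"original: {len(payload)} bytes, deflated: {len(deflated)} bytes")
```

So for a socket that mostly carries compressed images, the extension costs CPU on both ends for a net-zero (or negative) bandwidth change.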
- Version: latest
- Platform: all
Steps To Reproduce
Include the foxglove_websocket C++ library in a project and use it.
Expected Behavior
Per-message deflate is configurable, and possibly disabled by default.
Compression is set on a message level.
Although I'm honestly not sure if this disables per-message deflate for that message. If not, then this is definitely a bug.
Aha, I missed that part. I just saw libzlib being linked in and the per-message deflate option enabled when constructing the websocketpp server, and assumed it was on for all messages.
I can look at the handshake in Chrome dev tools to confirm the server and client agree on the per-message deflate extension, but I can't figure out how to see if a particular message was actually transmitted compressed or not.
I had the same problem. The only way to find out is probably to sniff the TCP traffic, or alternatively to toggle compression and see whether it is slower.
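If you do capture the traffic, there is a reliable per-message signal: RFC 7692 specifies that when permessage-deflate has been negotiated, a compressed message sets the RSV1 bit (0x40) in the first byte of its first frame. A small sketch that decodes that byte (the example byte values here are illustrative, not captured from a real session):

```python
def describe_frame(first_byte: int) -> dict:
    """Decode the flag bits of a WebSocket frame's first header byte."""
    return {
        "fin": bool(first_byte & 0x80),         # final fragment of the message
        "compressed": bool(first_byte & 0x40),  # RSV1: set on the first frame
                                                # of a permessage-deflate message
        "opcode": first_byte & 0x0F,            # 0x2 = binary frame
    }

# 0xC2 = FIN + RSV1 + binary opcode: a compressed binary message
print(describe_frame(0xC2))
# 0x82 = FIN + binary opcode: the same message sent uncompressed
print(describe_frame(0x82))
```

In Wireshark the same bit shows up as the "RSV1" flag on the first frame of each message, so no guessing from timing is needed.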
I could confirm using foxglove bridge that CPU consumption is significantly higher when setting options.useCompression to true (compared to when false, the default).