datalust / seq-logging

A Node.js client for the Seq HTTP ingestion API


RFC - How to handle big messages

raybooysen opened this issue

Hi Nicolas

We have some code that handles requests from a set of services. It doesn't know what type of service it's handling, and before doing any work it logs (through bunyan and bunyan-seq) the objects it's about to work on.

This is fine in general, except for edge cases where the messages will never get to Seq because of the maximum payload limits. In the code linked below, the logger first calls JSON.stringify on the object to check its length. In our edge cases, the objects are big enough that the stringify blocks the thread for a significant time.

How could we handle this better?

https://github.com/datalust/seq-logging/blob/master/seq_logger.js#L180
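For reference, the linked check is roughly of this shape (an illustrative sketch only, not the actual seq_logger.js code; `MAX_PAYLOAD_BYTES` is a stand-in name):

```js
// Illustrative sketch: the event is serialized up front purely to measure it,
// and for very large objects this synchronous stringify blocks the event loop.
const json = JSON.stringify(event);
if (json.length > MAX_PAYLOAD_BYTES) {
    // the event exceeds the ingestion limit and has to be dropped or truncated
}
```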

Hi Ray, thanks for the note 👍

This one's tricky to solve generically without quite a bit of experimentation and infrastructure. My usual advice in the Serilog world would be to avoid serializing arbitrary objects, although in Serilog we do have some basic serialization breadth/depth limits that sometimes prevent the performance impact from becoming visible.

Depending on how your logged objects look, though, you may be able to "clone" them to only a particular depth/breadth before passing them through the logger. A modified version of something like this that just trimmed the object by nulling its properties beyond a certain depth could do it; see the sketch below.
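As an illustration, a minimal sketch of that kind of depth-limited clone might look like the following (`cloneToDepth` is just a hypothetical helper name, not part of seq-logging or bunyan-seq):

```js
// Clone a value down to `maxDepth` levels, nulling anything deeper, so that
// very large or deeply nested objects never reach JSON.stringify in full.
function cloneToDepth(value, maxDepth) {
    if (value === null || typeof value !== 'object') {
        return value;
    }
    if (maxDepth <= 0) {
        return null; // trim everything beyond the depth limit
    }
    if (Array.isArray(value)) {
        return value.map(item => cloneToDepth(item, maxDepth - 1));
    }
    const clone = {};
    for (const key of Object.keys(value)) {
        clone[key] = cloneToDepth(value[key], maxDepth - 1);
    }
    return clone;
}

// Usage: trim the payload before handing it to bunyan/bunyan-seq, e.g.
// log.info({ payload: cloneToDepth(payload, 3) }, 'Handling request');
```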

Thoughts?

Seems sensible. There are other alternatives, like https://github.com/miktam/sizeof#readme, which recursively walks an object and gives you an approximate "size" in memory; however, I've not tested this anywhere.
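A rough sketch of how that might be used as a cheap pre-check (assuming the object-sizeof package; the threshold below is only illustrative, not a Seq limit):

```js
const sizeof = require('object-sizeof');

// Approximate in-memory size in bytes, without a full JSON.stringify.
const MAX_APPROX_BYTES = 256 * 1024; // illustrative threshold only

function isSmallEnoughToLog(payload) {
    return sizeof(payload) <= MAX_APPROX_BYTES;
}
```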