Substantial overhead when logging Dynamic Metadata in Access Logging
DuncanDoyle opened this issue · comments
Gloo Edge Product
Enterprise
Gloo Edge Version
v1.16.6
Kubernetes Version
?
Describe the bug
We found that each DYNAMIC_METADATA log field adds roughly 10 microseconds of latency at P50 and 20+ microseconds at P99, whereas a regular "key: value" log field adds only ~1.7 microseconds at P50 and ~3.3 microseconds at P99.
The tests were run on the same cluster using the k6 load-generation tool against an nginx (OpenResty) simulated backend. The results were nearly identical whether the metadata was set via set_metadata or via the extProc filter, and we verified that the hard disk is not a bottleneck in this case.
As can be seen in the table, gateway-proxy CPU usage grows significantly with the number of dynamic metadata fields.
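For context, a minimal sketch of the kind of access-log configuration that exhibits this overhead, assuming Gloo Edge's file-sink JSON format on the Gateway resource (the metadata namespaces and keys below are illustrative, not the exact fields from our tests):

```yaml
apiVersion: gateway.solo.io/v1
kind: Gateway
metadata:
  name: gateway-proxy
  namespace: gloo-system
spec:
  options:
    accessLoggingService:
      accessLog:
      - fileSink:
          path: /dev/stdout
          jsonFormat:
            # Cheap fields (~1.7us P50 each): plain command operators
            method: '%REQ(:METHOD)%'
            status: '%RESPONSE_CODE%'
            # Expensive fields (~10us P50 each): dynamic metadata lookups
            # (namespace/key names here are hypothetical examples)
            authz_result: '%DYNAMIC_METADATA(envoy.filters.http.ext_authz:result)%'
            custom_tag: '%DYNAMIC_METADATA(com.example.namespace:custom_tag)%'
```

Each additional `%DYNAMIC_METADATA(...)%` entry in `jsonFormat` is what compounds the per-request cost described above.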
Expected Behavior
Logging dynamic metadata in access logging should not cause substantial processing overhead.
Steps to reproduce the bug
N/A
Additional Environment Detail
No response
Additional Context
No response
Closing for now as a won't-do; no further work is planned on this. There is some collateral in Slab for anyone who is interested.
This will also be mitigated once command formatters for typed metadata are added.
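For reference, upstream Envoy already exposes a METADATA command operator via the envoy.formatter.metadata formatter extension, which is the kind of mechanism typed-metadata command formatters would build on. A hedged sketch of an Envoy access-log config using it (the type URLs and the metadata namespace/key are from memory and illustrative; verify against the Envoy docs):

```yaml
access_log:
- name: envoy.access_loggers.file
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
    path: /dev/stdout
    log_format:
      text_format_source:
        # DYNAMIC selects dynamic (stream) metadata; namespace and key are hypothetical
        inline_string: "%METADATA(DYNAMIC:com.example.namespace:my_key)%\n"
      formatters:
      - name: envoy.formatter.metadata
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.formatter.metadata.v3.Metadata
```

Rendering typed metadata through a dedicated formatter avoids repeatedly converting Protobuf structs to JSON per log field, which is the likely source of the overhead measured above.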