aws / amazon-cloudwatch-logs-for-fluent-bit

A Fluent Bit output plugin for CloudWatch Logs

Is there a way to specify a delay interval?

mohan-mu opened this issue · comments

commented

Hello maintainers & contributors,

I have a question: is there a way to specify a delay interval?

For example: instead of pushing logs in real time, I would like to push logs to CloudWatch every 10 minutes or greater.

commented

Can you provide more information about your use case? Are you wanting to have a consistent 10 minute delay from logs being generated to logs being shipped, or do you want all logs collected within some delay interval to be batched and shipped all at once?

commented

Thank you @sonofachamp,

In my use case, I do want all logs collected within some delay interval to be batched and shipped all at once.

For example: suppose I specified the delay as 10 minutes. I would like to store logs in a buffer and ship the collected logs every 10 minutes.

My intention is to push all logs to CloudWatch with the specified delay, without losing any logs in between.

commented

Thanks for the info. Can you help me understand why you want to do this as opposed to streaming your logs to your destination more frequently? I think there are a lot of downsides and risks associated with large local buffering, depending on how many logs your system is generating.

The [SERVICE] configuration section of Fluent Bit supports a Flush option which determines how often data collected by input plugins is flushed to output plugins. I wasn't able to find a documented maximum value (if any), so maybe this will suit your needs?
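
For reference, a minimal sketch of what that could look like; the input and output parameter values below (log path, region, log group, stream prefix) are placeholders for illustration, not a recommendation:

```ini
# Minimal sketch: flush buffered records every 10 minutes (600 seconds).
# Flush is specified in seconds in the [SERVICE] section.
[SERVICE]
    Flush             600
    Log_Level         info

[INPUT]
    Name              tail
    Path              /var/log/app/*.log    # placeholder path
    Tag               app.logs

[OUTPUT]
    Name              cloudwatch
    Match             app.*
    region            us-east-1             # placeholder values
    log_group_name    my-log-group
    log_stream_prefix my-stream-
```

Keep in mind that a larger Flush interval means more data sits in the local buffer between flushes, which increases memory/storage pressure and the amount of data at risk if the agent or host crashes.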

commented

Thanks for the info.
Right now we are streaming our logs, and they take up an average of 30-40 GB/month of storage in CloudWatch. It's costing us more than expected, so we are looking into optimization techniques to lower our expenses.

commented

Got it. To be clear, less frequent flushing will not reduce the amount of logs shipped, only the frequency, so if your cost is driven by storage then I don't think this will help. I would suggest looking into reducing the amount of logging your service(s) generate by reducing the log level (e.g. not DEBUG) where possible and trimming log messages to only relevant information.

You could use Fluent Bit to do more complex log routing, for example shipping the majority of (non-critical) logs to a cheaper data store and only the more critical logs to CloudWatch, but I would suggest first looking into reducing the amount of logs generated before considering this more complex setup.
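
As a rough sketch of that kind of routing, assuming your records are parsed JSON with a level field; the rewrite_tag filter and s3 output are standard Fluent Bit components, but every parameter value here is a placeholder:

```ini
# Sketch: re-tag records whose "level" field is ERROR or FATAL so they can be
# matched by the CloudWatch output; everything else goes to a cheaper store (S3).
[FILTER]
    Name              rewrite_tag
    Match             app.*
    # Rule format: $KEY  REGEX  NEW_TAG  KEEP
    Rule              $level ^(ERROR|FATAL)$ critical.$TAG false

[OUTPUT]
    Name              cloudwatch
    Match             critical.*
    region            us-east-1
    log_group_name    critical-logs
    log_stream_prefix app-

[OUTPUT]
    Name              s3
    Match             app.*
    region            us-east-1
    bucket            my-cheap-log-bucket
```

With KEEP set to false, matched records are only re-emitted under the new tag, so ERROR/FATAL records go to CloudWatch while everything else is picked up by the S3 output.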

commented

Thank you