cloudfoundry / loggregator-release

Cloud Native Logging

Debugging Container metrics not being populated

scottillogical opened this issue · comments

Hello loggregator friends, we are using Diego with loggregator (outside of cf-release). App logs are retrieved just fine, but queries to /containermetrics fail. Are there any facilities in the Metron agent to turn on debugging so I can see whether it is receiving these container metrics (supposedly via UDP from the Diego rep/executor)?

When I attach a firehose via noaa (even when I start up the emitter), I don't see container metrics like CpuPercentage, and I don't have much to go on to debug; the rep logs look fine.

noaa git:(d641e20) ✗ go run container_metrics_sample/consumer/main.go
===== Streaming ContainerMetrics (will only succeed if you have admin credentials)

This shows nothing, no matter what I do.
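For reference, here is a stripped-down sketch of what that noaa sample does, so the endpoint, token, and app GUID can be swapped out while debugging. This is only a sketch: the DOPPLER_ADDR, CF_ACCESS_TOKEN, and APP_GUID environment variable names are placeholders I picked, not anything loggregator defines.

package main

import (
	"crypto/tls"
	"fmt"
	"os"

	"github.com/cloudfoundry/noaa/consumer"
)

func main() {
	c := consumer.New(
		os.Getenv("DOPPLER_ADDR"),             // e.g. wss://doppler.example.com:443
		&tls.Config{InsecureSkipVerify: true}, // lab-only; verify certs for real deployments
		nil,
	)
	defer c.Close()

	metrics, err := c.ContainerMetrics(os.Getenv("APP_GUID"), os.Getenv("CF_ACCESS_TOKEN"))
	if err != nil {
		fmt.Fprintf(os.Stderr, "error fetching container metrics: %v\n", err)
		os.Exit(1)
	}
	if len(metrics) == 0 {
		fmt.Println("no container metrics returned for this app guid")
	}
	for _, m := range metrics {
		fmt.Printf("instance %d: cpu=%.2f%% mem=%d disk=%d\n",
			m.GetInstanceIndex(), m.GetCpuPercentage(), m.GetMemoryBytes(), m.GetDiskBytes())
	}
}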

Reproduced this on loggregator 73.0.1 and 99, and on Diego 1.8.1 and 1.26.2.

Is there any way to confirm that the metrics are reaching the Metron agent and being forwarded to Doppler? Thanks!
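One way to check the Doppler side is to attach a firehose and print only ContainerMetric envelopes: if none ever arrive, the envelopes are being lost before or at Doppler rather than in the /containermetrics query path. A sketch along those lines, again with placeholder DOPPLER_ADDR and CF_ACCESS_TOKEN names and assuming an admin-scoped token:

package main

import (
	"crypto/tls"
	"fmt"
	"os"

	"github.com/cloudfoundry/noaa/consumer"
	"github.com/cloudfoundry/sonde-go/events"
)

func main() {
	c := consumer.New(os.Getenv("DOPPLER_ADDR"), &tls.Config{InsecureSkipVerify: true}, nil)
	defer c.Close()

	// "container-metric-debug" is just a subscription id for this debugging session.
	msgs, errs := c.Firehose("container-metric-debug", os.Getenv("CF_ACCESS_TOKEN"))
	for {
		select {
		case env := <-msgs:
			if env.GetEventType() == events.Envelope_ContainerMetric {
				cm := env.GetContainerMetric()
				fmt.Printf("app=%s instance=%d cpu=%.2f%%\n",
					cm.GetApplicationId(), cm.GetInstanceIndex(), cm.GetCpuPercentage())
			}
		case err := <-errs:
			fmt.Fprintf(os.Stderr, "firehose error: %v\n", err)
			return
		}
	}
}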

rep logs

timestamp":"1508937729.708360195","source":"guardian","message":"guardian.metrics.finished","log_level":1,"data":{"id":"32e65e76-d3b1-4ec6-5a61-6432","session":"16584"}}
{"timestamp":"1508937729.708677530","source":"guardian","message":"guardian.api.garden-server.bulk_metrics.got-bulkmetrics","log_level":1,"data":{"handles":["57806449-c9c6-4872-6a30-0c37","9d164aeb-b69b-4128-7611-82e6","6ab4ae09-74b5-4e30-5bdb-a8f3","06890abc-bea7-4ab1-6b04-4e6b","23893428-d7a5-4ead-5304-0b5b","80e33e84-4aa4-439e-4924-1b77","788b57ff-b26c-4ece-701c-695d","62dd4a6c-6d3e-48df-61e1-0933","df62c2c7-cffd-4ed9-61c5-f37e","b1d5702d-5530-4087-450d-ee3d","32e65e76-d3b1-4ec6-5a61-6432"],"session":"4.1.3522"}}

==> /var/vcap/sys/log/rep/rep.stdout.log <==
{"timestamp":"1508937729.717040777","source":"rep","message":"rep.container-metrics-reporter.tick.get-all-metrics.containerstore-metrics.getting-metrics-in-garden-complete","log_level":0,"data":{"session":"10.320.1.1"}}
{"timestamp":"1508937729.717803478","source":"rep","message":"rep.container-metrics-reporter.tick.get-all-metrics.containerstore-metrics.complete","log_level":1,"data":{"session":"10.320.1.1"}}
{"timestamp":"1508937729.718291759","source":"rep","message":"rep.container-metrics-reporter.tick.get-all-metrics.containerstore-list.starting","log_level":1,"data":{"session":"10.320.1.2"}}
{"timestamp":"1508937729.718639612","source":"rep","message":"rep.container-metrics-reporter.tick.get-all-metrics.containerstore-list.complete","log_level":1,"data":{"session":"10.320.1.2"}}
{"timestamp":"1508937729.718789101","source":"rep","message":"rep.container-metrics-reporter.tick.emitting","log_level":0,"data":{"get-metrics-took":"2.420619777s","session":"10.320","total-containers":0}}
{"timestamp":"1508937729.718861580","source":"rep","message":"rep.container-metrics-reporter.tick.done","log_level":0,"data":{"session":"10.320","took":"2.420703526s"}}

We have created an issue in Pivotal Tracker to manage this:

https://www.pivotaltracker.com/story/show/152278717

The labels on this github issue will be updated when the story is started.

This happens when you don't set MetricGuid on the LRP.
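For anyone hitting the same thing when desiring LRPs directly against the Diego BBS (i.e. outside of cf-release, as in this setup), a minimal sketch of a DesiredLRP with the metrics GUID set is below. As far as I know the Go field is MetricsGuid in code.cloudfoundry.org/bbs/models, but verify the exact name against the Diego version you deploy; the GUID values and domain here are placeholders.

package main

import (
	"fmt"

	"code.cloudfoundry.org/bbs/models"
)

func main() {
	appGuid := "my-app-guid" // placeholder: the guid you later query /containermetrics with

	lrp := &models.DesiredLRP{
		ProcessGuid: appGuid + "-process",
		Domain:      "my-domain",
		Instances:   1,
		LogGuid:     appGuid,
		// Without this, the rep has no metrics guid to stamp on ContainerMetric
		// envelopes for the LRP's containers, and /containermetrics stays empty.
		MetricsGuid: appGuid,
		// RootFs, Action, Monitor, resource limits, etc. omitted for brevity;
		// a real DesiredLRP needs them before the BBS will accept it.
	}

	fmt.Printf("would desire LRP %s with MetricsGuid=%s\n", lrp.ProcessGuid, lrp.MetricsGuid)
}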