containers / prometheus-podman-exporter

Prometheus exporter for podman environments exposing containers, pods, images, volumes and networks information.

bug: --collector.store_labels does not work - even with 1.90 version

wally007 opened this issue · comments

Describe the bug
With version 1.90, I expect to see the container name in the container metrics.

To Reproduce
Install RHEL 9.3, run a container, and install podman_exporter 1.90.

The service is up and running:

● prometheus-podman-exporter.service - Prometheus exporter for podman (v4) machine
     Loaded: loaded (/etc/systemd/system/prometheus-podman-exporter.service; enabled; preset: disabled)
     Active: active (running) since Tue 2024-03-12 15:50:44 UTC; 2h 26min ago
   Main PID: 861 (prometheus-podm)
      Tasks: 18 (limit: 100374)
     Memory: 161.2M
        CPU: 15.711s
     CGroup: /system.slice/prometheus-podman-exporter.service
             └─861 /usr/bin/prometheus-podman-exporter --collector.store_labels --collector.enable-all --web.listen-address 127.0.0.1:9882

Mar 12 15:50:44 podman1 prometheus-podman-exporter[861]: ts=2024-03-12T15:50:44.651Z caller=handler.go:104 level=info collector=container
Mar 12 15:50:44 podman1 prometheus-podman-exporter[861]: ts=2024-03-12T15:50:44.651Z caller=handler.go:104 level=info collector=image
Mar 12 15:50:44 podman1 prometheus-podman-exporter[861]: ts=2024-03-12T15:50:44.651Z caller=handler.go:104 level=info collector=network
Mar 12 15:50:44 podman1 prometheus-podman-exporter[861]: ts=2024-03-12T15:50:44.651Z caller=handler.go:104 level=info collector=pod
Mar 12 15:50:44 podman1 prometheus-podman-exporter[861]: ts=2024-03-12T15:50:44.651Z caller=handler.go:104 level=info collector=system
Mar 12 15:50:44 podman1 prometheus-podman-exporter[861]: ts=2024-03-12T15:50:44.651Z caller=handler.go:104 level=info collector=volume
Mar 12 15:50:44 podman1 podman[861]: 2024-03-12 15:50:44.824891671 +0000 UTC m=+0.418275272 system refresh
Mar 12 15:50:45 podman1 prometheus-podman-exporter[861]: ts=2024-03-12T15:50:45.132Z caller=exporter.go:82 level=info msg="Listening on" address=127.0.0.1:9882
Mar 12 15:50:45 podman1 prometheus-podman-exporter[861]: ts=2024-03-12T15:50:45.132Z caller=tls_config.go:313 level=info msg="Listening on" address=127.0.0.1:9882
Mar 12 15:50:45 podman1 prometheus-podman-exporter[861]: ts=2024-03-12T15:50:45.133Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=127.0.0.1:9882
root@podman1:~>curl -s  http://127.0.0.1:9882/metrics | grep podman_container_mem_usage_bytes
# HELP podman_container_mem_usage_bytes Container memory usage.
# TYPE podman_container_mem_usage_bytes gauge
podman_container_mem_usage_bytes{id="6ac5cbf017e9",pod_id="",pod_name=""} 9.555968e+06
podman_container_mem_usage_bytes{id="751fd678fd81",pod_id="",pod_name=""} 4.0337408e+07
podman_container_mem_usage_bytes{id="77fded31009b",pod_id="",pod_name=""} 3.8244352e+07
podman_container_mem_usage_bytes{id="872c35951a71",pod_id="",pod_name=""} 1.10112768e+08
podman_container_mem_usage_bytes{id="9c4b67edce43",pod_id="a02f37390a33",pod_name="grafana-agent"} 2.4784896e+07
podman_container_mem_usage_bytes{id="ad411a795efc",pod_id="a02f37390a33",pod_name="grafana-agent"} 2.5776128e+07
podman_container_mem_usage_bytes{id="b51846f7b274",pod_id="",pod_name=""} 1.9447808e+07
podman_container_mem_usage_bytes{id="c3f7409463e1",pod_id="",pod_name=""} 2.9732864e+07
podman_container_mem_usage_bytes{id="d769648ee679",pod_id="a02f37390a33",pod_name="grafana-agent"} 5.9138048e+07
podman_container_mem_usage_bytes{id="e96d06c68e45",pod_id="a02f37390a33",pod_name="grafana-agent"} 430080
podman_container_mem_usage_bytes{id="ef392d0f4f36",pod_id="a02f37390a33",pod_name="grafana-agent"} 57344
podman_container_mem_usage_bytes{id="f220eaa7b913",pod_id="a02f37390a33",pod_name="grafana-agent"} 2.07048704e+08
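In the meantime, the container name can be attached at query time with a PromQL join. This is a sketch, assuming that `podman_container_info` on this exporter version exposes a `name` label alongside `id` (the info metric shown later in this thread elides most of its labels, so verify on your own /metrics output):

```promql
# Attach the container name from the info metric to the memory metric,
# matching series on the shared "id" label.
podman_container_mem_usage_bytes
  * on (id) group_left (name)
  podman_container_info
```

`group_left (name)` copies the `name` label from the right-hand (info) side onto each matching memory series.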

Expected behavior
I expect to see container name in all container metrics.

Desktop (please complete the following information):

root@podman1:~>cat /etc/redhat-release
Red Hat Enterprise Linux release 9.3 (Plow)
root@podman1:~>
root@podman1:~>podman -v
podman version 4.6.1

Hi @wally007

This is not actually a bug. The --collector.store_labels option does not add the container name to the metrics; it only converts podman pod/container/image labels into Prometheus metric labels.

For example, using podman inspect to print a container's labels:

podman container inspect ae9092d35296-infra --format "{{ .Config.Labels }}"
map[io.buildah.version:1.33.3]

And the corresponding exporter metric:

podman_container_info{id="ef06328bce49", io_buildah_version="1.33.3", ...... ,ports=""} 1
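The conversion above turns the podman label key `io.buildah.version` into the Prometheus label name `io_buildah_version`. A minimal sketch of that sanitization step in Python (the exporter's actual implementation is in Go and may differ in edge cases; Prometheus label names must match `[a-zA-Z_][a-zA-Z0-9_]*`):

```python
import re

def podman_label_to_prom_label(key: str) -> str:
    """Convert a podman label key (e.g. 'io.buildah.version') into a
    Prometheus-safe label name by replacing invalid characters with '_'
    and prefixing names that start with a digit."""
    name = re.sub(r"[^a-zA-Z0-9_]", "_", key)
    if re.match(r"^[0-9]", name):
        name = "_" + name
    return name

print(podman_label_to_prom_label("io.buildah.version"))  # io_buildah_version
```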

I have opened a new issue, #206, which will cover your need.

Regards

Hi @navidys ,

Oh ... in that case I completely misunderstood the feature. I've been waiting for --collector.store_labels since 1.6 or 1.7 :-)

Could you please also add image names to the image metrics?

podman_image_size{id="ef5d4631a596",repository="<none>",tag="<none>"} 8.79890719e+08

Sure, it will.
There will be a new CLI option, --enhance-metrics, which will enhance the metrics to include the same fields (labels) as their corresponding podman_<.....>_info metric.

@navidys thank you - I will close this "issue" then, if that is ok.