stefanprodan / swarmprom

Docker Swarm instrumentation with Prometheus, Grafana, cAdvisor, Node Exporter and Alert Manager

Grafana reporting 171% available disk space

robertofabrizi opened this issue

Bug Report

What did you do?
Deployed swarmprom in my Swarm cluster, logged into Grafana, and noticed that the available disk space exceeds 100%

(screenshot: Grafana dashboard gauge showing 171% available disk space)

What did you expect to see?
A value at or below 100%

What did you see instead? Under which circumstances?
171%, every time
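
For context, Grafana gauges like this are typically derived from node_exporter filesystem metrics with a query of roughly the following shape. This is a sketch only: the metric names assume a pre-0.16 node_exporter, the hostname is a placeholder, and the actual swarmprom dashboard query may differ.

# hypothetical example: free space as a percentage of total, across real filesystems
curl -sG http://prometheus:9090/api/v1/query \
  --data-urlencode 'query=100 * sum(node_filesystem_free{fstype!~"tmpfs|rootfs"}) / sum(node_filesystem_size{fstype!~"tmpfs|rootfs"})'

If an expression like that already returns more than 100%, the raw metrics rather than the dashboard are the likely suspect.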

Is it a bug in the node-exporter data?
The df -h output from the first of the two nodes is:

[msadmin@MS-DSC1 ~]$ df -h
Filesystem                                    Size  Used Avail Use% Mounted on
/dev/sda2                                      30G  4.1G   26G  14% /
devtmpfs                                      3.9G     0  3.9G   0% /dev
tmpfs                                         3.9G     0  3.9G   0% /dev/shm
tmpfs                                         3.9G  377M  3.6G  10% /run
tmpfs                                         3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1                                     497M  105M  392M  22% /boot
/dev/sdb1                                      16G   45M   15G   1% /mnt/resource
//msshare.file.core.windows.net/msshare  5.0T  8.5M  5.0T   1% /mnt/msshare
tmpfs                                         797M     0  797M   0% /run/user/1000

and the second is virtually identical.
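
To compare df against the raw data the exporter reports, the metrics endpoint can be queried directly on an affected node. A minimal check, assuming node_exporter is reachable on its default port 9100 (the port published by swarmprom may differ):

# dump the raw filesystem metrics and compare the byte values with df -B1
curl -s http://localhost:9100/metrics | grep '^node_filesystem_'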
Thank you,
Roberto

I had this problem a while ago. The kernel on that node was old and was reporting a different value than node_exporter was expecting.
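
A quick way to check whether a node is affected is to look at the running kernel on each Swarm node (hostnames below are examples based on the prompt shown earlier):

# CentOS / RHEL 7 ships a 3.10.x kernel by default
uname -r
# or across the cluster, assuming SSH access (MS-DSC2 is a guess for the second node's name)
for h in MS-DSC1 MS-DSC2; do ssh "$h" uname -r; done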

@mterzo I'm finding it hard to understand why CentOS / RHEL ship with kernel 3.10, which has caused us a plethora of issues, this being the smallest of them and the cgroup leak leading to kernel panics the biggest...

The issue has been resolved by updating to the latest long-term (kernel-lt) 4.4.x kernel from ELRepo.
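
For anyone else hitting this, the upgrade path on CentOS 7 looks roughly like the following. This is a sketch only; package names, the release RPM URL, and the GRUB steps can vary by release and firmware type, so check the ELRepo documentation before running it.

# add the ELRepo repository and install the long-term (kernel-lt) kernel
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-lt
# make the new kernel the default entry and reboot (grub.cfg path differs on UEFI systems)
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot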