prometheus / node_exporter

Exporter for machine metrics

Home Page: https://prometheus.io/


Metric was collected before with the same name and label values

gnanasalten opened this issue · comments

Host operating system: output of uname -a

Linux dc2cpoenrvmd534 3.10.0-1160.66.1.el7.x86_64 #1 SMP Wed May 18 16:02:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

node_exporter version: output of node_exporter --version

node_exporter, version 1.5.0 (branch: HEAD, revision: 1b48970)
build user: root@6e7732a7b81b
build date: 20221129-18:59:09
go version: go1.19.3
platform: linux/amd64

node_exporter command line flags

/usr/local/bin/node_exporter --collector.systemd --collector.sockstat --collector.filefd --collector.textfile.directory=/var/lib/node_exporter/

node_exporter log output

Sep 15 02:57:37 xxxxxxxxx node_exporter: ts=2023-09-15T02:57:37.684Z caller=stdlib.go:105 level=error msg="error gathering metrics: 17 error(s) occurred:\n* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/boot" > untyped:<value:1 > } was collected before with the same name and label values\n* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/boot" > untyped:<value:1 > } was collected before with the same name and label values\n* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/var/log" > untyped:<value:1 > } was collected before with the same name and label values\n* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/var/log/audit" > untyped:<value:1 > } was collected before with the same name and label values\n* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/boot" > untyped:<value:1 > } was collected before with the same name and label values\n* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/home" > untyped:<value:1 > } was collected before with the same name and label values\n* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/opt" > untyped:<value:1 > } was collected before with the same name and label values\n* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/var" > untyped:<value:1 > } was collected before with the same name and label values\n* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/var" > untyped:<value:1 > } was collected before with the same name and label values\n* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/tmp" > untyped:<value:1 > } was collected before with the same name and label values\n* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/var/tmp" > untyped:<value:1 > } was collected before with the same name and label values\n* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/var" > untyped:<value:1 > } was collected before with the same name and label values\n* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/dev/shm" > untyped:<value:1 > } was collected before with the same name and label values\n* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/home" > untyped:<value:1 > } was collected before with the same name and label values\n* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/var/log" > untyped:<value:1 > } was collected before with the same name and label values\n* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/var/tmp" > untyped:<value:1 > } was collected before with the same name and label values\n* [from Gatherer #2] collected metric "node_fstab_mount_status" { label:<name:"filesystem" value:"/var/log/audit" > untyped:<value:1 > } was collected before with the same name and label values"

Are you running node_exporter in Docker?

No

What did you do that produced an error?

Scraped the endpoint from Prometheus.

What did you expect to see?

No error

What did you see instead?

The error shown in the log output above.

Can you provide your /etc/fstab and /proc/mounts?

/etc/fstab

LABEL=img-rootfs / ext4 rw,relatime 0 1
LABEL=img-boot /boot ext4 rw,relatime 0 1
LABEL=fs_var /var ext4 rw,relatime 0 2
LABEL=fs_var_tmp /var/tmp ext4 rw,nosuid,nodev,noexec,relatime 0 2
LABEL=fs_var_log /var/log ext4 rw,relatime 0 3
LABEL=var_log_aud /var/log/audit ext4 rw,relatime 0 4
LABEL=fs_home /home ext4 rw,nodev,relatime 0 2
LABEL=fs_opt /opt ext4 rw,nodev,relatime 0 2
LABEL=fs_tmp /tmp ext4 rw,nodev,nosuid,noexec,relatime 0 2
tmpfs /dev/shm tmpfs nodev,nosuid,noexec 0 0

/proc/mounts

sysfs /sys sysfs rw,seclabel,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,seclabel,nosuid,size=3976556k,nr_inodes=994139,mode=755 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,seclabel,nosuid,nodev 0 0
devpts /dev/pts devpts rw,seclabel,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,seclabel,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs ro,seclabel,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,seclabel,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,seclabel,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,seclabel,nosuid,nodev,noexec,relatime,cpuacct,cpu 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,seclabel,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,seclabel,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,seclabel,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,seclabel,nosuid,nodev,noexec,relatime,net_prio,net_cls 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,seclabel,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,seclabel,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,seclabel,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,seclabel,nosuid,nodev,noexec,relatime,blkio 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
/dev/mapper/ubuntu_vg-lv_root / ext4 rw,seclabel,relatime,data=ordered 0 0
selinuxfs /sys/fs/selinux selinuxfs rw,relatime 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=36,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=12112 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,seclabel,relatime 0 0
mqueue /dev/mqueue mqueue rw,seclabel,relatime 0 0
/dev/mapper/ubuntu_vg-lv_home /home ext4 rw,seclabel,nodev,relatime,data=ordered 0 0
/dev/vda1 /boot ext4 rw,seclabel,relatime,data=ordered 0 0
/dev/mapper/ubuntu_vg-lv_opt /opt ext4 rw,seclabel,nodev,relatime,data=ordered 0 0
/dev/mapper/ubuntu_vg-lv_var /var ext4 rw,seclabel,relatime,data=ordered 0 0
/dev/mapper/ubuntu_vg-lv_var_tmp /var/tmp ext4 rw,seclabel,nosuid,nodev,noexec,relatime,data=ordered 0 0
/dev/mapper/ubuntu_vg-lv_var_log /var/log ext4 rw,seclabel,relatime,data=ordered 0 0
/dev/mapper/ubuntu_vg-lv_var_log_audit /var/log/audit ext4 rw,seclabel,relatime,data=ordered 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
/dev/mapper/ubuntu_vg-lv_tmp /tmp ext4 rw,seclabel,nodev,relatime,data=ordered 0 0
tmpfs /run/user/1000 tmpfs rw,seclabel,nosuid,nodev,relatime,size=800892k,mode=700,uid=1000,gid=1000 0 0

@discordianfish Can you please help with this?

node_fstab_mount_status{filesystem="/"} 1
node_fstab_mount_status{filesystem="/boot/efi"} 1
node_fstab_mount_status{filesystem="/home"} 1
node_fstab_mount_status{filesystem="/opt"} 1
node_fstab_mount_status{filesystem="/tmp"} 1
node_fstab_mount_status{filesystem="/var"} 1
node_fstab_mount_status{filesystem="/var/log"} 1
node_fstab_mount_status{filesystem="/var/log/audit"} 1
node_fstab_mount_status{filesystem="/var/tmp"} 1
node_fstab_mount_status{filesystem="/dev/shm"} 1
node_syslog_err_count 0
node_syslog_bad_block_count 0

Different versions of the fstab collection script may be in use.

@gnanasalten Can you provide your textfile script?

If you use the fstab-check.sh script, the mountpoint will appear as a label:

node_fstab_mount_status{mountpoint='xxx'} 1

node_exporter shouldn't fail that loudly when two mountpoints have the same path. That is totally valid on Linux.

@SuperSandro2000 It should not, but I don't know if that is what is going on here.

A similar problem happened to me.
I am running WSL2 with systemd enabled.

node_exporter starts and issues the following error:

ts=2024-04-19T05:09:48.730Z caller=stdlib.go:105 level=error msg="error gathering metrics: 6 error(s) occurred:
* [from Gatherer #2] collected metric \"node_filesystem_device_error\" { label:{name:\"device\"  value:\"none\"}  label:{name:\"fstype\"  value:\"tmpfs\"}  label:{name:\"mountpoint\"  value:\"/run/desktop/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu-20.04/8a5edab282632443219e051e4ade2d1d5bbc671c781051bf1437897cbdfea0f1/run/user\"}  gauge:{value:1}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_device_error\" { label:{name:\"device\"  value:\"none\"}  label:{name:\"fstype\"  value:\"tmpfs\"}  label:{name:\"mountpoint\"  value:\"/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu-20.04/8a5edab282632443219e051e4ade2d1d5bbc671c781051bf1437897cbdfea0f1/run/user\"}  gauge:{value:1}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_device_error\" { label:{name:\"device\"  value:\"none\"}  label:{name:\"fstype\"  value:\"tmpfs\"}  label:{name:\"mountpoint\"  value:\"/parent-distro/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu-20.04/8a5edab282632443219e051e4ade2d1d5bbc671c781051bf1437897cbdfea0f1/run/user\"}  gauge:{value:1}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_readonly\" { label:{name:\"device\"  value:\"none\"}  label:{name:\"fstype\"  value:\"tmpfs\"}  label:{name:\"mountpoint\"  value:\"/run/desktop/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu-20.04/8a5edab282632443219e051e4ade2d1d5bbc671c781051bf1437897cbdfea0f1/run/user\"}  gauge:{value:0}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_readonly\" { label:{name:\"device\"  value:\"none\"}  label:{name:\"fstype\"  value:\"tmpfs\"}  label:{name:\"mountpoint\"  value:\"/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu-20.04/8a5edab282632443219e051e4ade2d1d5bbc671c781051bf1437897cbdfea0f1/run/user\"}  gauge:{value:0}} was collected before with the same name and label values
* [from Gatherer #2] collected metric \"node_filesystem_readonly\" { label:{name:\"device\"  value:\"none\"}  label:{name:\"fstype\"  value:\"tmpfs\"}  label:{name:\"mountpoint\"  value:\"/parent-distro/mnt/host/wsl/docker-desktop-bind-mounts/Ubuntu-20.04/8a5edab282632443219e051e4ade2d1d5bbc671c781051bf1437897cbdfea0f1/run/user\"}  gauge:{value:0}} was collected before with the same name and label values"

No such directories exist on my system:

/run/desktop/...
/parent-distro/...
/mnt/host/...

Hrm, label:{name:\"device\" value:\"none\"} looks suspicious. Is there anything else in the log that would point to an issue retrieving the device? @SuperQ any ideas?