prometheus-operator / kube-prometheus

Use Prometheus to monitor Kubernetes and applications running on Kubernetes

Home Page: https://prometheus-operator.dev/

Appears to be an issue with the Grafana grid.libsonnet

TimothySutton81 opened this issue · comments

What happened?
Running https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/build.sh fails with a runtime error while rendering the Grafana dashboards:

RUNTIME ERROR: field does not exist: get
vendor/github.com/grafana/grafonnet/gen/grafonnet-v10.4.0/custom/util/./grid.libsonnet:119:25-32 thunk
vendor/github.com/grafana/grafonnet/gen/grafonnet-v10.4.0/custom/util/./grid.libsonnet:121:29-34 function
std.jsonnet:789:24-47 thunk
std.jsonnet:789:9-57 function
std.jsonnet:789:9-57 function
std.jsonnet:790:5-28 function
vendor/github.com/grafana/grafonnet/gen/grafonnet-v10.4.0/custom/util/./grid.libsonnet:(93:5)-(172:6) function
vendor/github.com/kubernetes-monitoring/kubernetes-mixin/dashboards/apiserver.libsonnet:(320:11)-(345:10) thunk
vendor/github.com/grafana/grafonnet/gen/grafonnet-v10.4.0/clean/../custom/dashboard.libsonnet:24:30-36 thunk
std.jsonnet:32:25
...
std.jsonnet:919:10-19 function
std.jsonnet:947:39-78 thunk <array_element>
std.jsonnet:951:9-28 function
std.jsonnet:952:5-23 function
vendor/github.com/brancz/kubernetes-grafana/grafana/grafana.libsonnet:100:25-79 object
vendor/github.com/brancz/kubernetes-grafana/grafana/grafana.libsonnet:100:15-81 object
vendor/github.com/brancz/kubernetes-grafana/grafana/grafana.libsonnet:(93:7)-(101:8) thunk <array_element>
vendor/github.com/brancz/kubernetes-grafana/grafana/grafana.libsonnet:(92:12)-(132:6) object
example.jsonnet:31:1-83 object
During manifestation
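
The frame at grid.libsonnet:119 suggests the lookup that fails is std.get, which older jsonnet interpreters do not ship (it landed in go-jsonnet around v0.18.0, to the best of my knowledge). A minimal probe, assuming a jsonnet binary on PATH, would reproduce the same error on an interpreter without it:

```jsonnet
// probe.jsonnet — evaluate with `jsonnet probe.jsonnet`.
// On an interpreter that lacks std.get this fails with the same
// "RUNTIME ERROR: field does not exist: get" as the trace above;
// on a recent go-jsonnet it evaluates cleanly.
{
  present: std.get({ a: 1 }, 'a'),
  fallback: std.get({ a: 1 }, 'b', 'default'),
}
```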

Did you expect to see something different?
For the build to complete without errors
How to reproduce it (as minimally and precisely as possible):
jb init # Creates the initial/empty jsonnetfile.json
jb install github.com/prometheus-operator/kube-prometheus/jsonnet/kube-prometheus@main
wget https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/build.sh -O build.sh
chmod +x build.sh
./build.sh
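
Before running build.sh it is worth confirming the interpreter is recent enough, since "field does not exist: get" usually points at the binary rather than the jsonnet sources. A hedged sketch of a version gate — the version-string format is an assumption based on what `jsonnet --version` typically prints for the Go implementation:

```shell
#!/bin/sh
# Hypothetical helper: succeeds if the version string passed in
# reports go-jsonnet v0.18.0 or newer (the first release with
# std.get, as far as I know). Feed it the output of
# `jsonnet --version`, e.g.
# "Jsonnet commandline interpreter (Go implementation) v0.20.0".
jsonnet_has_std_get() {
  # Extract "MAJOR MINOR" from a trailing vX.Y.Z token.
  ver=$(printf '%s\n' "$1" | sed -n 's/.*v\([0-9]*\)\.\([0-9]*\).*/\1 \2/p')
  set -- $ver
  [ "${1:-0}" -gt 0 ] || [ "${2:-0}" -ge 18 ]
}

if jsonnet_has_std_get "Jsonnet commandline interpreter (Go implementation) v0.20.0"; then
  echo "ok: std.get should be available"
else
  echo "interpreter too old for grafonnet (std.get missing)"
fi
```

Wiring it to the real binary would be `jsonnet_has_std_get "$(jsonnet --version)"`.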

Environment
Using the standard example.jsonnet triggers this issue

  • Prometheus Operator version:
    This is latest from git

  • Kubernetes version information:
    Not even using kubectl

  • Kubernetes cluster kind:
    k3s, but the failure happens before a cluster is even involved

  • Manifests:

example.jsonnet:

local kp =
  (import 'kube-prometheus/main.libsonnet') +
  // Uncomment the following imports to enable its patches
  // (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
  // (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
  // (import 'kube-prometheus/addons/node-ports.libsonnet') +
  // (import 'kube-prometheus/addons/static-etcd.libsonnet') +
  // (import 'kube-prometheus/addons/custom-metrics.libsonnet') +
  // (import 'kube-prometheus/addons/external-metrics.libsonnet') +
  // (import 'kube-prometheus/addons/pyrra.libsonnet') +
  {
    values+:: {
      common+: {
        namespace: 'monitoring',
      },
    },
  };

{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } +
{
  ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
  for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator))
} +
// { 'setup/pyrra-slo-CustomResourceDefinition': kp.pyrra.crd } +
// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready
{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
{ 'prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } +
{ 'kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
// { ['pyrra-' + name]: kp.pyrra[name] for name in std.objectFields(kp.pyrra) if name != 'crd' } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) }

Closing: this turned out to be a GOPATH issue on my part.
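
For anyone landing here with the same trace: a stale binary earlier on PATH (e.g. an old jsonnet sitting in $GOPATH/bin) would produce exactly this. A minimal sketch of the shadowing, using fake stand-in scripts rather than real jsonnet binaries:

```shell
#!/bin/sh
# Demonstrates that PATH order decides which `jsonnet` build.sh runs:
# a stale copy in a GOPATH-style bin dir shadows a newer install.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/gopath-bin" "$tmp/new-bin"
printf '#!/bin/sh\necho old-jsonnet-v0.16\n' > "$tmp/gopath-bin/jsonnet"
printf '#!/bin/sh\necho go-jsonnet-v0.20\n' > "$tmp/new-bin/jsonnet"
chmod +x "$tmp/gopath-bin/jsonnet" "$tmp/new-bin/jsonnet"

# Stale GOPATH bin listed first: the old interpreter wins the lookup.
old=$(env PATH="$tmp/gopath-bin:$tmp/new-bin" jsonnet)
# Newer install listed first: the shadowing is gone.
new=$(env PATH="$tmp/new-bin:$tmp/gopath-bin" jsonnet)
echo "$old"
echo "$new"
rm -rf "$tmp"
```

On a real machine, `type -a jsonnet` (bash) or `which -a jsonnet` shows every copy on PATH in lookup order, which makes this kind of shadowing easy to spot.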