prometheus-operator / kube-prometheus

Use Prometheus to monitor Kubernetes and applications running on Kubernetes

Home Page: https://prometheus-operator.dev/

Can't kube-prometheus-release-0.13 be deployed alongside Rancher?

yangrenyue opened this issue

Kubernetes cluster version: v1.27.6

Rancher 2.8.1 was deployed with Helm:
[root@k8s-master01 ~]$ helm list -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
fleet cattle-fleet-system 1 2024-04-15 10:23:18.063754516 +0000 UTC deployed fleet-103.1.0+up0.9.0 0.9.0
fleet-crd cattle-fleet-system 1 2024-04-15 10:08:19.763368058 +0000 UTC deployed fleet-crd-103.1.0+up0.9.0 0.9.0
rancher cattle-system 1 2024-04-15 18:21:38.313943792 +0800 CST deployed rancher-2.8.1 v2.8.1
rancher-provisioning-capi cattle-provisioning-capi-system 2 2024-04-15 10:15:33.656932208 +0000 UTC deployed rancher-provisioning-capi-103.2.0+up0.0.1 1.4.4
rancher-webhook cattle-system 1 2024-04-15 10:23:42.811456982 +0000 UTC deployed rancher-webhook-103.0.1+up0.4.2 0.4.2
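
Before retrying the kube-prometheus install, it can help to see which monitoring.coreos.com CRDs already exist on the cluster; the conflict reported further down shows that a field manager named "rancher" already owns part of at least one of them. A minimal check, just as a sketch:

# List any Prometheus Operator CRDs that are already installed on the cluster
[root@k8s-master01 ~]$ kubectl get crd | grep monitoring.coreos.com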

When I deploy kube-prometheus-release-0.13, I run into a conflict with Rancher. How should this be resolved?
[root@k8s-master01 ~/kube-prometheus-release-0.13]$ kubectl apply --server-side -f manifests/setup
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheusagents.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/scrapeconfigs.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com serverside-applied
namespace/monitoring serverside-applied
Apply failed with 1 conflict: conflict with "rancher" using apiextensions.k8s.io/v1: .spec.versions
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:

  • If you intend to manage all of these fields, please re-run the apply
    command with the --force-conflicts flag.
  • If you do not intend to manage all of the fields, please edit your
    manifest to remove references to the fields that should keep their
    current managers.
  • You may co-own fields by updating your manifest to match the existing
    value; in this case, you'll become the manager if the other manager(s)
    stop managing the field (remove it from their configuration).
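
Following the hints in the kubectl message above, one way forward, assuming you want kube-prometheus (rather than Rancher) to own these CRDs, is to inspect the current field managers and then re-run the server-side apply with --force-conflicts. This is only a sketch; the CRD name used below (prometheuses.monitoring.coreos.com) is an example, since the error does not say which CRD actually hit the conflict.

# Show which field managers currently own the CRD (the CRD name here is only an example)
[root@k8s-master01 ~]$ kubectl get crd prometheuses.monitoring.coreos.com --show-managed-fields -o yaml | less

# If kube-prometheus should take over the conflicting fields, force the server-side apply,
# then apply the remaining kube-prometheus manifests as usual
[root@k8s-master01 ~/kube-prometheus-release-0.13]$ kubectl apply --server-side --force-conflicts -f manifests/setup
[root@k8s-master01 ~/kube-prometheus-release-0.13]$ kubectl apply -f manifests/

Keep in mind that --force-conflicts only transfers ownership of the conflicting fields; if Rancher (or a later Rancher upgrade) re-applies its own version of these CRDs, the two installs can keep overwriting each other, so in the long run it is usually cleaner to let only one of the two stacks manage the monitoring.coreos.com CRDs.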