fluxcd / helm-operator

The Flux Helm Operator, once upon a time a solution for declarative Helm releases. Successor: https://github.com/fluxcd/helm-controller

Home Page: https://docs.fluxcd.io/projects/helm-operator/


Helm Operator Warns About Failure To Annotate Resources

jeff-minard-ck opened this issue

The Helm Operator reports an error while applying annotations:

failed to annotate release resources: error: arguments in resource/name form must have a single resource and name

This appears to be an error from kubectl, suggesting the operator ran something like:

kubectl annotate --overwrite --namespace kube-system service/heapster-telegraf /heapster-telegraf test=anno

Note the missing <type> in front of the second resource argument.

This argument list is built by the annotator, which compiles it from namespacedResourceMap.

My guess is that obj.GetKind() in res := obj.GetKind() + "/" + obj.GetName() is returning an empty string for some resource, causing the above error from kubectl.
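If that guess is right, a guard in the annotator would turn the hard kubectl failure into a per-resource warning, which is the behavior requested below. A minimal sketch of the idea (the object type and resourceArg helper here are illustrative, not the operator's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// object stands in for a decoded manifest object; the real annotator
// calls obj.GetKind() and obj.GetName() on an unstructured object.
type object struct {
	Kind string
	Name string
}

// resourceArg builds the kind/name argument passed to kubectl annotate.
// If kind or name is empty, the resulting "/name" (or "/") argument
// would make kubectl fail with:
//
//	arguments in resource/name form must have a single resource and name
//
// so we return an error and let the caller warn and skip instead.
func resourceArg(obj object) (string, error) {
	if obj.Kind == "" || obj.Name == "" {
		return "", fmt.Errorf("skipping object with empty kind (%q) or name (%q)", obj.Kind, obj.Name)
	}
	return obj.Kind + "/" + obj.Name, nil
}

func main() {
	objs := []object{
		{Kind: "Service", Name: "heapster-telegraf"},
		{Kind: "", Name: "heapster-telegraf"}, // the shape the bug report suggests
	}
	var args []string
	for _, o := range objs {
		arg, err := resourceArg(o)
		if err != nil {
			fmt.Println("warning:", err) // warn and continue, do not abort
			continue
		}
		args = append(args, arg)
	}
	fmt.Println("kubectl annotate args:", strings.Join(args, " "))
}
```

With a guard like this, the invalid second argument never reaches kubectl, and the remaining valid resources are still annotated.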

To Reproduce

I don't have a HelmRelease or chart I can share for this error.

Expected behavior

Annotations are applied to all valid resources and invalid ones are warnings.

Additional context

  • Helm Operator version: 1.4.0
  • Kubernetes version: 1.18

(Yes, v2, I hear ya -- we're eagerly awaiting a solution for the lack of cross-namespace valuesFrom, as we're one of those groups that leans very heavily on that v1 feature.)

I can provide a slightly sanitized version of the Helm manifest from the HelmRelease that issues one of these warnings (several do).

manifest.yaml
---
# Source: ourchart/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: ourchart
    chart: "ourchart-1.0.0"
    heritage: Helm
    release: kube-system-heapster-telegraf
  name: "kube-system-heapster-telegraf-heapster-telegraf-0-0-1-156-8d91b"
---
# Source: ourchart/templates/app-specific/infsvcs_k8s-heapster.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: ourchart
    chart: "ourchart-1.0.0"
    heritage: Helm
    release: kube-system-heapster-telegraf
  name: "kube-system-heapster-telegraf-heapster-telegraf-0-0-1-156-8d91b"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: "system:heapster"
subjects:
- kind: ServiceAccount
  name: "kube-system-heapster-telegraf-heapster-telegraf-0-0-1-156-8d91b"
  namespace: kube-system

#
---
# Source: ourchart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: "heapster-telegraf"
  labels:
    app: "kube-system-heapster-telegraf-ourchart"
    created_by: "ourchart-chart"
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 8082
    protocol: TCP
  selector:
    app: "kube-system-heapster-telegraf-ourchart"
    deployment: ""
---
# Source: ourchart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "heapster-telegraf"
  labels:
    app: "kube-system-heapster-telegraf-ourchart"
    created_by: "ourchart-chart"
  annotations:
    creditkarma.com/all: "true"
    sidecar-injector.creditkarma.com/traffic-proxy: "disabled"
spec:
  selector:
    matchLabels:
      app: "kube-system-heapster-telegraf-ourchart"
      created_by: "ourchart-chart"
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: "0%"
      maxUnavailable: "5%"
  replicas: 1
  revisionHistoryLimit: 2
  template:
    metadata:
      name: "heapster-telegraf"
      labels:
        app: "kube-system-heapster-telegraf-ourchart"
        created_by: "ourchart"
      annotations:
        ours/all: "true"
        sidecarthing: "disabled"
    spec:
      dnsPolicy: ClusterFirst
      hostNetwork: false
      serviceAccountName: "kube-system-heapster-telegraf-heapster-telegraf-0-0-1-156-8d91b"
      containers:
      - name: ourchart
        image: "image/path/k8s-heapster:0.0.1-1564776195-2aba5fa1d4a8"
        imagePullPolicy: "IfNotPresent"
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        env:
        - name: K8S_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_ID
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
          - /heapster
          - --source=kubernetes.summary_api:?host_id_annotation=container.googleapis.com/instance_id
          - --sink=influxdb:http://$(NODE_IP):2200?$(INFLUXDB_OPTIONS)
        ports:
        - name: "http"
          containerPort: 8082
          protocol: TCP
        resources:
          limits:
            cpu: 10000m
            memory: 1024Mi
          requests:
            cpu: 1000m
            memory: 1024Mi
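One thing that stands out in the manifest above is the comment-only document (the lone "#" between two "---" separators). A YAML document like that decodes to a nil object with no kind or name; whether it is the actual trigger here is unconfirmed, but filtering such documents before building the annotate arguments is cheap. A rough sketch, using a hypothetical hasContent helper with plain string handling rather than a real YAML parser:

```go
package main

import (
	"fmt"
	"strings"
)

// hasContent reports whether a YAML document contains anything other
// than comments and blank lines. A comment-only document (like the
// stray "#" between two "---" separators in the manifest above)
// decodes to a nil object, which would surface with an empty kind
// and name in the annotator.
func hasContent(doc string) bool {
	for _, line := range strings.Split(doc, "\n") {
		trimmed := strings.TrimSpace(line)
		if trimmed != "" && !strings.HasPrefix(trimmed, "#") {
			return true
		}
	}
	return false
}

func main() {
	// Abbreviated stand-in for the multi-document manifest above.
	manifest := `# Source: ourchart/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
---
#
---
apiVersion: v1
kind: Service`
	for i, doc := range strings.Split(manifest, "\n---\n") {
		fmt.Printf("document %d: hasContent=%v\n", i, hasContent(doc))
	}
}
```

Running this shows the middle document carries no content, so skipping documents that fail hasContent before decoding would keep empty objects out of the annotator's resource list.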

Thank you for the report. Please bear with me as I familiarize myself with the details of this case.

We're sorry about the trouble that you have experienced with Helm Operator!

Sorry if your issue remains unresolved. The Helm Operator is in maintenance mode; we recommend everyone upgrade to Flux v2 and the Helm Controller.

A new release of Helm Operator is out this week, 1.4.4.

We will continue to support Helm Operator in maintenance mode for an indefinite period of time, and eventually archive this repository.

Please be aware that Flux v2 has a vibrant developer community that is actively working through minor releases and delivering new features on the way to General Availability.

In the meantime, this repo will still be monitored, but support is limited to migration issues only. Due to time constraints, I will have to close many issues today without reading them all in detail. If your issue is very important, you are welcome to reopen it, but given how stale all issues are at this point, a fresh report is more likely to be in order. If unresolved problems prevent your migration, please open a new issue in the appropriate Flux v2 repo.

Helm Operator releases will continue where possible for a limited time, as a courtesy to those who cannot migrate yet, but they are strongly discouraged for ongoing production use: our strict adherence to semver backward-compatibility guarantees limits how far we can upgrade many dependencies without breaking compatibility, so there are likely known CVEs that cannot be resolved.

We recommend upgrading to the actively maintained Flux v2 as soon as possible.

I am going to go ahead and close every issue at once today.
Thanks for participating in Helm Operator and Flux! 💚 💙