akuity / kargo

Application lifecycle orchestration

Home Page: https://kargo.akuity.io/

Helm Chart: Indentation when adding values to Dex Server Deployment is incorrect

alexlebens opened this issue

I have been attempting to install Kargo from the Helm chart and to configure OIDC with the Dex server. However, my provider was returning a "missing client ID" error when logging in.

I added the client ID and client secret to a Secret and imported it via the following path in the values file, then followed Dex's documentation to read the client ID and client secret from the environment.

api:
  oidc:
    dex:
      envFrom:
        - secretRef:
            name: kargo-oidc-secret
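
For reference, the Secret referenced above is just an Opaque Secret in the namespace Kargo is installed in, shaped roughly like this (the name and key names are ones I chose, not anything the chart requires):

apiVersion: v1
kind: Secret
metadata:
  name: kargo-oidc-secret
type: Opaque
stringData:
  CLIENT_ID: <client id from the provider>
  CLIENT_SECRET: <client secret from the provider>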

However, on inspecting the Deployment resource in my cluster, there was no envFrom field at all, and Dex was not passing a client ID or client secret to my provider.

Looking over the following template, I saw that it does add the values from the path above, but the nindent values are inconsistent: resources and securityContext are rendered with nindent 10, while volumeMounts, envFrom, and env use nindent 8.

dex-server deployment template

      containers:
      - name: dex-server
        image: {{ .Values.api.oidc.dex.image.repository }}:{{ .Values.api.oidc.dex.image.tag }}
        imagePullPolicy: {{ .Values.api.oidc.dex.image.pullPolicy }}
        command: ["dex", "serve"]
        args: ["/etc/dex/config.yaml"]
        {{- with (concat .Values.global.env .Values.api.oidc.dex.env) }}
        env:
          {{- toYaml . | nindent 8 }}
        {{- end }}
        {{- with (concat .Values.global.envFrom .Values.api.oidc.dex.envFrom) }}
        envFrom:
          {{- toYaml . | nindent 8 }}
        {{- end }}
        volumeMounts:
        - mountPath: /etc/dex
          name: config
          readOnly: true
        {{- if .Values.api.oidc.dex.volumeMounts }}
          {{- toYaml .Values.api.oidc.dex.volumeMounts | nindent 8 }}
        {{- end }}
        securityContext:
          {{- toYaml .Values.api.oidc.dex.securityContext | nindent 10 }}
        resources:
          {{- toYaml .Values.api.oidc.dex.resources | nindent 10 }}
        {{- if .Values.api.oidc.dex.probes.enabled }}
        livenessProbe:
          httpGet:
            path: /healthz/live
            port: 5558
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 300
        readinessProbe:
          httpGet:
            path: /healthz/ready
            port: 5558
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 300
          {{- end }}

I do not think the issue you are experiencing is caused by the indentation, as the difference between nindent 8 and 10 is only cosmetic (two additional spaces before list items versus none).
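
To illustrate, both of these snippets parse to the same structure, since YAML allows sequence items to sit at the same indentation as their parent key or to be indented further:

        env:
        - name: CLIENT_ID    # nindent 8: items aligned with the key
          value: example

        env:
          - name: CLIENT_ID  # nindent 10: items indented two extra spaces
            value: example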

Instead, I think you are probably trying to make use of a chart feature not yet available in 0.5.x. See: https://github.com/akuity/kargo/tree/v0.5.2/charts/kargo

In that case, should adding the env vars individually work with 0.5.2? I added them to the dex-server deployment in the way below instead, and the deployment does not contain them either.

api:
  oidc:
    dex:
      env:
        - name: CLIENT_ID
          valueFrom:
            secretKeyRef:
              name: kargo-oidc-secret
              key: CLIENT_ID
        - name: CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              name: kargo-oidc-secret
              key: CLIENT_SECRET
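
For context, the intent was for Dex to expand these variables in its connector configuration, along the lines of Dex's documentation; the issuer and connector details below are placeholders, and where exactly the connector config is supplied in this chart may differ:

connectors:
  - type: oidc
    id: my-provider
    name: My Provider
    config:
      issuer: https://idp.example.com
      clientID: $CLIENT_ID
      clientSecret: $CLIENT_SECRET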

The container spec of the rendered deployment:

      containers:
      - name: dex-server
        image: ghcr.io/dexidp/dex:v2.37.0
        imagePullPolicy: IfNotPresent
        command: ["dex", "serve"]
        args: ["/etc/dex/config.yaml"]
        volumeMounts:
        - mountPath: /etc/dex
          name: config
          readOnly: true
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi
        livenessProbe:
          httpGet:
            path: /healthz/live
            port: 5558
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 300
        readinessProbe:
          httpGet:
            path: /healthz/ready
            port: 5558
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 300

Sorry, my mistake; I was not using an updated chart.