elastic / helm-charts

You know, for Kubernetes

Elasticsearch authentication password secret not generated

amiros89 opened this issue · comments

Chart version: 7.17.3

Kubernetes version: 1.23

Kubernetes provider: EKS

Helm Version: v3.6.0

Output of `helm get release`:

```
NAME: elasticsearch
LAST DEPLOYED: Thu Oct 6 16:42:22 2022
NAMESPACE: elk
STATUS: deployed
REVISION: 2
USER-SUPPLIED VALUES:
secret:
  enabled: true
  password: password

COMPUTED VALUES:
antiAffinity: hard
antiAffinityTopologyKey: kubernetes.io/hostname
clusterDeprecationIndexing: "false"
clusterHealthCheckParams: wait_for_status=green&timeout=1s
clusterName: elasticsearch
enableServiceLinks: true
envFrom: []
esConfig: {}
esJavaOpts: ""
esJvmOptions: {}
esMajorVersion: ""
extraContainers: []
extraEnvs: []
extraInitContainers: []
extraVolumeMounts: []
extraVolumes: []
fsGroup: ""
fullnameOverride: ""
healthNameOverride: ""
hostAliases: []
httpPort: 9200
image: docker.elastic.co/elasticsearch/elasticsearch
imagePullPolicy: IfNotPresent
imagePullSecrets: []
imageTag: 7.17.3
ingress:
  annotations: {}
  className: nginx
  enabled: false
  hosts:
  - host: chart-example.local
    paths:
    - path: /
  pathtype: ImplementationSpecific
  tls: []
initResources: {}
keystore: []
labels: {}
lifecycle: {}
masterService: ""
maxUnavailable: 1
minimumMasterNodes: 2
nameOverride: ""
networkHost: 0.0.0.0
networkPolicy:
  http:
    enabled: false
  transport:
    enabled: false
nodeAffinity: {}
nodeGroup: master
nodeSelector: {}
persistence:
  annotations: {}
  enabled: true
  labels:
    enabled: false
podAnnotations: {}
podManagementPolicy: Parallel
podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000
podSecurityPolicy:
  create: false
  name: ""
  spec:
    fsGroup:
      rule: RunAsAny
    privileged: true
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
    - secret
    - configMap
    - persistentVolumeClaim
    - emptyDir
priorityClassName: ""
protocol: http
rbac:
  automountToken: true
  create: false
  serviceAccountAnnotations: {}
  serviceAccountName: ""
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5
replicas: 3
resources:
  limits:
    cpu: 1000m
    memory: 2Gi
  requests:
    cpu: 1000m
    memory: 2Gi
roles:
  data: "true"
  ingest: "true"
  master: "true"
  ml: "true"
  remote_cluster_client: "true"
schedulerName: ""
secret:
  enabled: true
  password: quali
secretMounts: []
securityContext:
  capabilities:
    drop:
    - ALL
  runAsNonRoot: true
  runAsUser: 1000
service:
  annotations: {}
  enabled: true
  externalTrafficPolicy: ""
  httpPortName: http
  labels: {}
  labelsHeadless: {}
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  nodePort: ""
  publishNotReadyAddresses: false
  transportPortName: transport
  type: ClusterIP
sysctlInitContainer:
  enabled: true
sysctlVmMaxMapCount: 262144
terminationGracePeriod: 120
tests:
  enabled: true
tolerations: []
transportPort: 9300
updateStrategy: RollingUpdate
volumeClaimTemplate:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi

HOOKS:

---
# Source: elasticsearch/templates/test/test-elasticsearch-health.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "elasticsearch-zrhie-test"
  annotations:
    "helm.sh/hook": test
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  securityContext:
    fsGroup: 1000
    runAsUser: 1000
  containers:
  - name: "elasticsearch-odqoc-test"
    image: "docker.elastic.co/elasticsearch/elasticsearch:7.17.3"
    imagePullPolicy: "IfNotPresent"
    command:
    - "sh"
    - "-c"
    - |
      #!/usr/bin/env bash -e
      curl -XGET --fail 'elasticsearch-master:9200/_cluster/health?wait_for_status=green&timeout=1s'
  restartPolicy: Never
MANIFEST:

---
# Source: elasticsearch/templates/poddisruptionbudget.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: "elasticsearch-master-pdb"
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: "elasticsearch-master"
---
# Source: elasticsearch/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-master
  labels:
    heritage: "Helm"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    {}
spec:
  type: ClusterIP
  selector:
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  publishNotReadyAddresses: false
  ports:
  - name: http
    protocol: TCP
    port: 9200
  - name: transport
    protocol: TCP
    port: 9300
---
# Source: elasticsearch/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-master-headless
  labels:
    heritage: "Helm"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  clusterIP: None # This is needed for statefulset hostnames like elasticsearch-0 to resolve
  # Create endpoints also if the related pod isn't ready
  publishNotReadyAddresses: true
  selector:
    app: "elasticsearch-master"
  ports:
  - name: http
    port: 9200
  - name: transport
    port: 9300

---
# Source: elasticsearch/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-master
  labels:
    heritage: "Helm"
    release: "elasticsearch"
    chart: "elasticsearch"
    app: "elasticsearch-master"
  annotations:
    esMajorVersion: "7"
spec:
  serviceName: elasticsearch-master-headless
  selector:
    matchLabels:
      app: "elasticsearch-master"
  replicas: 3
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-master
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 30Gi
  template:
    metadata:
      name: "elasticsearch-master"
      labels:
        release: "elasticsearch"
        chart: "elasticsearch"
        app: "elasticsearch-master"
      annotations:
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      automountServiceAccountToken: true
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - "elasticsearch-master"
            topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 120
      volumes:
      enableServiceLinks: true
      initContainers:
      - name: configure-sysctl
        securityContext:
          runAsUser: 0
          privileged: true
        image: "docker.elastic.co/elasticsearch/elasticsearch:7.17.3"
        imagePullPolicy: "IfNotPresent"
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        resources:
          {}
      containers:
      - name: "elasticsearch"
        securityContext:
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          runAsUser: 1000
        image: "docker.elastic.co/elasticsearch/elasticsearch:7.17.3"
        imagePullPolicy: "IfNotPresent"
        readinessProbe:
          exec:
            command:
            - bash
            - -c
            - |
              set -e
              # If the node is starting up wait for the cluster to be ready (request params: "wait_for_status=green&timeout=1s" )
              # Once it has started only check that the node itself is responding
              START_FILE=/tmp/.es_start_file

              # Disable nss cache to avoid filling dentry cache when calling curl
              # This is required with Elasticsearch Docker using nss < 3.52
              export NSS_SDB_USE_CACHE=no

              http () {
                local path="${1}"
                local args="${2}"
                set -- -XGET -s

                if [ "$args" != "" ]; then
                  set -- "$@" $args
                fi

                if [ -n "${ELASTIC_PASSWORD}" ]; then
                  set -- "$@" -u "elastic:${ELASTIC_PASSWORD}"
                fi

                curl --output /dev/null -k "$@" "http://127.0.0.1:9200${path}"
              }

              if [ -f "${START_FILE}" ]; then
                echo 'Elasticsearch is already running, lets check the node is healthy'
                HTTP_CODE=$(http "/" "-w %{http_code}")
                RC=$?
                if [[ ${RC} -ne 0 ]]; then
                  echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with RC ${RC}"
                  exit ${RC}
                fi
                # ready if HTTP code 200, 503 is tolerable if ES version is 6.x
                if [[ ${HTTP_CODE} == "200" ]]; then
                  exit 0
                elif [[ ${HTTP_CODE} == "503" && "7" == "6" ]]; then
                  exit 0
                else
                  echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with HTTP code ${HTTP_CODE}"
                  exit 1
                fi

              else
                echo 'Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )'
                if http "/_cluster/health?wait_for_status=green&timeout=1s" "--fail" ; then
                  touch ${START_FILE}
                  exit 0
                else
                  echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
                  exit 1
                fi
              fi
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 3
          timeoutSeconds: 5
        ports:
        - name: http
          containerPort: 9200
        - name: transport
          containerPort: 9300
        resources:
          limits:
            cpu: 1000m
            memory: 2Gi
          requests:
            cpu: 1000m
            memory: 2Gi
        env:
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: cluster.initial_master_nodes
          value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2,"
        - name: discovery.seed_hosts
          value: "elasticsearch-master-headless"
        - name: cluster.name
          value: "elasticsearch"
        - name: network.host
          value: "0.0.0.0"
        - name: cluster.deprecation_indexing.enabled
          value: "false"
        - name: node.data
          value: "true"
        - name: node.ingest
          value: "true"
        - name: node.master
          value: "true"
        - name: node.ml
          value: "true"
        - name: node.remote_cluster_client
          value: "true"
        volumeMounts:
        - name: "elasticsearch-master"
          mountPath: /usr/share/elasticsearch/data

NOTES:
1. Watch all cluster members come up.
  $ kubectl get pods --namespace=elk -l app=elasticsearch-master -w
2. Test cluster health using Helm test.
  $ helm --namespace=elk test elasticsearch
```

Describe the bug: The secret containing the Elasticsearch username and password is not generated upon `helm install`, so access to Elasticsearch on port 9200 is unprotected.
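For context, the unprotected endpoint can be demonstrated like this (a sketch assuming the default `elasticsearch-master` service and the `elk` namespace from the output above):

```bash
# Forward the chart's client service to localhost.
kubectl port-forward svc/elasticsearch-master 9200:9200 --namespace=elk

# In another shell: with no security configured, this returns the cluster
# info without any credentials being required.
curl http://localhost:9200/
```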

Steps to reproduce:

  1. helm repo add elastic https://helm.elastic.co
  2. helm install elasticsearch elastic/elasticsearch
  3. kubectl get secrets

Expected behavior: According to the Helm chart documentation, a secret with a randomly generated password should be created, and access to Elasticsearch should be protected with a username and password.
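One way to check for the expected secret (the name `elasticsearch-master-credentials` is an assumption based on the `<fullname>-credentials` convention the newer chart documents; on this install no such secret appears):

```bash
# List secrets in the release namespace.
kubectl get secrets --namespace=elk

# If the chart had generated credentials, the password could be read back
# like this (secret name assumed as noted above).
kubectl get secret elasticsearch-master-credentials --namespace=elk \
  -o jsonpath='{.data.password}' | base64 -d
```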

Hey @amiros89, the documentation on the main branch is for the in-development version, which has not been released yet, as mentioned in the warning block:

[Screenshot of the warning block from the main branch README, 2022-11-09]

The documentation for 7.17.3 release is here: https://github.com/elastic/helm-charts/blob/v7.17.3/elasticsearch/README.md

If you want to deploy 7.17.3 with security, you can find an example here: https://github.com/elastic/helm-charts/tree/7.17/elasticsearch/examples/security
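In short, on 7.17 security is configured through values rather than a chart-generated secret. A minimal values sketch along the lines of that example (not the verbatim file; it assumes you create the `elastic-credentials` and `elastic-certificates` secrets yourself before installing):

```yaml
# Create the referenced secrets first, e.g.:
#   kubectl create secret generic elastic-credentials \
#     --from-literal=username=elastic --from-literal=password=<your-password>
protocol: https

esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12

extraEnvs:
  - name: ELASTIC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: password
  - name: ELASTIC_USERNAME
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: username

secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates
    path: /usr/share/elasticsearch/config/certs
```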

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.