zammad / zammad-helm

Zammad Helm chart for Kubernetes

Home Page: https://artifacthub.io/packages/helm/zammad/zammad

[FR]: Arbitrary UID and GID, instead of 1000

klml opened this issue · comments

Version of Helm and Kubernetes:

  • Kubernetes Version: v1.24.6+5157800
  • helm version: v3.9.4

What happened

We want to run Zammad on OpenShift (< 4.10). On OpenShift, the UID (and GID) must come from an arbitrary, project-assigned range.
Currently Zammad depends on UID 1000, so I get:

Warning   FailedCreate        statefulset/zammad                                       create Pod zammad-0 in StatefulSet zammad failed error:
 pods "zammad-0" is forbidden: unable to validate against any security context constraint:
 [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted:
 .spec.securityContext.fsGroup: Invalid value: []int64{1000}: 1000 is not an allowed group, 
spec.initContainers[0].securityContext.runAsUser: Invalid value: 1000: must be in the ranges: [1000890000, 1000899999], 
spec.initContainers[1].securityContext.runAsUser: Invalid value: 1000: must be in the ranges: [1000890000, 1000899999], ....

What you expected to happen

The Zammad user's UID and GID can be controlled by the Kubernetes / OpenShift cluster.

This feature request is even more far-reaching than "Change zammad user UID and GID from 1000 to 1111" (#305).

Duplicate of #305

The linked issue has the reasoning on why this is not going to happen.

@monotek @MrGeneration please allow me to reopen this issue. I'll explain the situation and our current ideas below.

Goal

We want to enable Zammad to be run in OpenShift environments with the recommended security best practices.

Current Situation

  • The zammad-docker-compose container already creates an unprivileged user (UID 1000), which is used by default.
  • The helm chart enforces this user in its securityContext.
  • It looks like it is already possible to specify a different unprivileged user in the securityContext. @klml already uses this to run it with UID 1002, and it does seem to work (see the sketch after this list). This is good and means we probably don't need to make changes to the image itself!
  • However, this goes against the OpenShift security recommendations, which call for working with dynamic UIDs that are not known in advance.
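
For illustration, pinning a different unprivileged user could look roughly like this in a values file (a rough sketch only; UID 1002 as in @klml's setup, any unprivileged ID should do):

securityContext:
  # run all containers as a custom unprivileged user instead of the default 1000
  runAsUser: 1002
  runAsGroup: 1002
  fsGroup: 1002
  runAsNonRoot: true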

Ideas / Next Steps

  • Klaus will come up with a proposal to handle dynamic UID values which cannot be specified "hardcoded" in the helm chart. This might involve:
    • Somehow passing the UID from OpenShift to the helm chart in a dynamic way (e.g. via ENV variable), as an additional option.
    • Tweaking all places that use runAsUser and runAsGroup to be compatible with that as well.
    • Maybe other things that we find along the way?

If you have any feedback or suggestions for us, we'd highly appreciate them.

I guess this would only work as long as we have the process of copying the files from the container's tmp to the mounted volume, as the fsGroup setting does a chown on the files after mounting them.

It's only done that way because some file uploads (images?) were stored in the filesystem, and that was the way to keep them during an update. IMHO this is not the case anymore, or it can be prevented by putting everything in the DB.

So my plan is to finally get rid of this copy process, having the Zammad files in the right place already, read-only too, owned by the zammad user (UID 1000).

If you change the user, you could likely not read them anymore without making them world-readable.

That sounds great. We could consider using GID 0 rather than a custom zammad group. That's a recommended way for migrating to OpenShift, as the dynamic user has GID 0. Then we wouldn't need any chown at all, I guess.

https://developers.redhat.com/blog/2020/10/26/adapting-docker-and-kubernetes-containers-to-run-on-red-hat-openshift-container-platform#runtime_user_compatibility_with_kubernetes
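
In chart terms, that idea would translate into something like this (a rough, untested sketch):

securityContext:
  # the dynamic OpenShift user always belongs to GID 0
  runAsGroup: 0
  # fsGroup (and its chown) could then possibly be dropped entirely, provided
  # the files in the image are owned by group 0 with g=u permissions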

You need the fsGroup setting (and the chown it does), as without it Zammad's volume mount would be owned by root and any non-root user (including zammad / 1000) could not read / write it anymore.

In my case, containers using GID 0 would even be blocked by the Kyverno policy controller: https://kyverno.io/policies/other/require-non-root-groups/require-non-root-groups/

IMHO, OpenShift should not be the reason to change the container image or the default security settings of the helm chart.

I guess a separate / derived chart for OpenShift, which perhaps shares code with the "default" Kubernetes chart, is an option if that is what it takes to satisfy both worlds.

Not sure why this would be needed?
All the user IDs are already configurable since #161 was merged.

If more OpenShift-specific adjustments are needed, post-renderers could be used too: https://helm.sh/docs/topics/advanced/#post-rendering
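
A post-renderer is just an executable that receives the fully rendered manifests on stdin and prints the modified manifests to stdout. A purely hypothetical sketch (the script name and sed patterns are made up for illustration):

$ cat strip-security-context.sh
#!/bin/sh
# Hypothetical post-renderer: drop the hardcoded UID/GID lines from the
# rendered manifests before they are sent to the cluster.
exec sed -e '/runAsUser: 1000/d' -e '/fsGroup: 1000/d'

$ helm upgrade --install zammad zammad/zammad --post-renderer ./strip-security-context.sh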

Sorry for my delay. I had to get used to it first ;)

Somehow passing the UID from OpenShift to the helm chart in a dynamic way (e.g. via ENV variable), as an additional option.

Sorry, I got this wrong. This is not needed.

I thought I had to deal with this chown in the "data-chmod" initContainer, but this initContainer is not enabled by default, and I leave it disabled on OpenShift as well.

This chart works fine with an arbitrary UID when I remove:

securityContext:
  fsGroup: 1000
  runAsUser: 1000
  runAsNonRoot: true
  runAsGroup: 1000
zammadConfig:
  initContainers:
    zammad:
      securityContext:
        runAsUser: 0

So I would suggest adding a switch for both, to make this configurable. In the Bitnami Elasticsearch chart I am already using master.podSecurityContext.enabled to disable the static security context and let OpenShift set the UID.
Would that also be a solution here?
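
Such a switch could look roughly like this (a rough sketch of the Bitnami-style pattern; names are illustrative, not actual code from this chart):

# values.yaml
podSecurityContext:
  enabled: true
  fsGroup: 1000
  runAsUser: 1000

# templates/statefulset.yaml
{{- if .Values.podSecurityContext.enabled }}
securityContext: {{- omit .Values.podSecurityContext "enabled" | toYaml | nindent 8 }}
{{- end }}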

You should already be able to overwrite the securityContext with an empty map. Have you tried that?

@monotek thanks for the pointer to "securityContext with an empty map"!

So this works:

# OpenShift start
securityContext:
  fsGroup:
  runAsUser:
  runAsNonRoot:
  runAsGroup:

zammadConfig:
  initContainers:
    zammad:
      extraRsyncParams: "--no-perms --no-owner --no-times"
      securityContext:
        runAsUser:

# OpenShift end

leads to

$ oc exec zammad-0 -- id
Defaulted container "zammad-nginx" out of: zammad-nginx, zammad-railsserver, zammad-scheduler, zammad-websocket, zammad-init (init), postgresql-init (init), elasticsearch-init (init)
uid=1002450000(1002450000) gid=0(root) groups=0(root),1002450000

Thank you very much for your help! 😄

I meant something like this:

# OpenShift start
securityContext: {}

zammadConfig:
  initContainers:
    zammad:
      extraRsyncParams: "--no-perms --no-owner --no-times"
      securityContext: {}

# OpenShift end

I tried 'securityContext: {}' for the top-level securityContext, but this did not work.

Sorry, I'm not an OpenShift user, so I can't test it.

If the user of the container is a problem now, you might need to create your own container with "FROM zammad/zammad-docker-compose:zammad-5.4.0-4" and change the user as you need it.

The same with:

zammadConfig:
  initContainers:
    zammad:
      extraRsyncParams: "--no-perms --no-owner --no-times"
      securityContext: {}

leads to

$ oc get events
LAST SEEN   TYPE      REASON                   OBJECT                                    MESSAGE
...
22s         Warning   FailedCreate             statefulset/zammad                        create Pod zammad-0 in StatefulSet zammad failed error: pods "zammad-0" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, spec.initContainers[0].securityContext.runAsUser: Invalid value: 0: must be in the ranges: [1000680000, 1000689999], provider "restricted": Forbidden: not usable by user or serviceaccount, provider "nonroot-v2": Forbidden: not usable by user or serviceaccount, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "elasticsearch-scc": Forbidden: not usable by user or serviceaccount, provider "log-collector-scc": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "hostnetwork-v2": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "cmk-cluster-collector-scc": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "cmk-container-metrics-scc": Forbidden: not usable by user or serviceaccount, provider "node-exporter": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount, provider "velero-privileged": Forbidden: not usable by user or serviceaccount]

and just for the record

securityContext:
  fsGroup: ""

does not work either; an empty value is needed, since fsGroup expects an integer, not a string 😄

But I am fine with this empty value solution.

@klml this is great news, thank you!

@monotek @klml Before we close this, can we document correct usage in OpenShift somehow? Where would be the best place for this?

I think the readme.
Somewhere below the upgrading section?

I'm honestly not a fan of the split way we handle documentation between the Docker context and the rest of Zammad.

The documentation is permanently out of date due to fast changes that are never reflected back, and keeping it current relies on me having the motivation to work out the differences on paper. The whole Docker context (including Helm) is handled entirely differently than the rest of Zammad's universe.

I personally would love to see this addressed to have one source of truth not 3.

That would be nice, @MrGeneration. But until Zammad officially supports Docker/Kubernetes as production platforms, I don't even expect its documentation to cover them and be up to date.

@klml would you be so kind as to provide a markdown snippet for running Zammad on OpenShift that we could add to the readme, for future reference and other users? You can just paste it here and I'll include it, or open an MR for it. This would be awesome, as we don't have OpenShift available to do this on our own.

As far as I understand helm/helm#9136, you should rather use null instead of just leaving the values empty. It seems setting a value to null removes the key, and this is an officially supported Helm mechanism.

It is documented in the official Helm documentation as well: https://helm.sh/docs/chart_template_guide/values_files/#deleting-a-default-key
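
Applied to the working override from above, that would look like this (the same values, just using null explicitly; untested on OpenShift, but it should behave identically):

# OpenShift start
securityContext:
  fsGroup: null
  runAsUser: null
  runAsNonRoot: null
  runAsGroup: null

zammadConfig:
  initContainers:
    zammad:
      extraRsyncParams: "--no-perms --no-owner --no-times"
      securityContext:
        runAsUser: null
# OpenShift end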