tnozicka / openshift-acme

ACME controller for OpenShift and Kubernetes clusters (supports e.g. Let's Encrypt).


Manage Certs for multiple namespaces (but still not cluster-wide)

wgordon17 opened this issue · comments

What would you like to be added:
I would like to be able to manage Let's Encrypt certs for routes in multiple namespaces/projects from a single deployment of openshift-acme, without requiring the cluster-wide installation.

Why is this needed:
In hosted environments like OpenShift Online/OpenShift Dedicated, users may have multiple projects but won't have the rights necessary to create a ClusterRole. Ideally this would be something like a comma-separated environment variable listing namespaces. Or better yet, just ask the API for the full list of namespaces/projects that the ServiceAccount has permissions in. That way, the only configuration necessary after the initial deployment is granting the ServiceAccount the required role in each namespace.
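The comma-separated-variable idea could look something like the sketch below. Note that `WATCH_NAMESPACES` is a hypothetical variable name for illustration, not an option openshift-acme actually implements:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// parseNamespaces splits a comma-separated namespace list,
// trimming whitespace and dropping empty entries.
func parseNamespaces(raw string) []string {
	var namespaces []string
	for _, ns := range strings.Split(raw, ",") {
		if ns = strings.TrimSpace(ns); ns != "" {
			namespaces = append(namespaces, ns)
		}
	}
	return namespaces
}

func main() {
	// WATCH_NAMESPACES is a made-up name for this sketch.
	os.Setenv("WATCH_NAMESPACES", "team-a, team-b,team-c")
	fmt.Println(parseNamespaces(os.Getenv("WATCH_NAMESPACES")))
	// → [team-a team-b team-c]
}
```

The controller would then start one informer per entry instead of a single namespace-scoped one.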

Additionally, this would let admins in OpenShift Dedicated deploy the openshift-acme operator in a locked-down namespace that developers can't access, while still allowing them to take advantage of the operator's abilities.

@tnozicka

It looks like you have to create the Role in each namespace, and then you can create the RoleBinding... but this at least works with the openshift-acme operator installed in one project, monitoring routes in a separate project. It looks like the RFE would just be to support multiple projects.

I think this is a valid RFE; somehow I thought the namespace option could already be repeated. This should be pretty easy, as it is just a matter of setting up a different informer per namespace.

I wonder if there is an easy way to do the auto-discovery you suggested. I guess for each namespace we would have to run a SAR (SubjectAccessReview) against the rules in
https://github.com/tnozicka/openshift-acme/blob/master/deploy/letsencrypt-live/single-namespace/role.yaml#L7

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

/remove-lifecycle stale

I think this is a valid RFE; somehow I thought the namespace option could already be repeated. This should be pretty easy, as it is just a matter of setting up a different informer per namespace.

Not as easy as I thought, but I have started wiring the new version for multiple namespaces.

If someone could figure out the manual steps for auto-discovering which namespaces the SA has access to, given a Role, that would be helpful and I could wire it up for autodetection.

manual steps for auto-discovering which namespaces the SA has access to, given a Role

What are you asking here? Wouldn't this be as straightforward as enumerating through GET /apis/project.openshift.io/v1/projects, and then for each project, testing POST /apis/authorization.k8s.io/v1/selfsubjectaccessreviews for get route, update route, create secret, and update secret?
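The per-project checks described above could be sketched by building one SelfSubjectAccessReview body per (verb, resource) pair and POSTing each to the endpoint mentioned. This is only a stdlib sketch of the request payloads (the concrete verb/resource list here is an assumption based on this comment, not the project's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ResourceAttributes mirrors the SelfSubjectAccessReview spec
// fields relevant to these checks.
type ResourceAttributes struct {
	Namespace string `json:"namespace"`
	Verb      string `json:"verb"`
	Group     string `json:"group,omitempty"`
	Resource  string `json:"resource"`
}

type SelfSubjectAccessReview struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Spec       struct {
		ResourceAttributes ResourceAttributes `json:"resourceAttributes"`
	} `json:"spec"`
}

// checksFor builds one SSAR body per (verb, resource) pair the
// controller would need in the given namespace.
func checksFor(namespace string) []SelfSubjectAccessReview {
	pairs := []ResourceAttributes{
		{Verb: "get", Group: "route.openshift.io", Resource: "routes"},
		{Verb: "update", Group: "route.openshift.io", Resource: "routes"},
		{Verb: "create", Resource: "secrets"},
		{Verb: "update", Resource: "secrets"},
	}
	reviews := make([]SelfSubjectAccessReview, 0, len(pairs))
	for _, p := range pairs {
		p.Namespace = namespace
		r := SelfSubjectAccessReview{
			APIVersion: "authorization.k8s.io/v1",
			Kind:       "SelfSubjectAccessReview",
		}
		r.Spec.ResourceAttributes = p
		reviews = append(reviews, r)
	}
	return reviews
}

func main() {
	// Each body would be POSTed to
	// /apis/authorization.k8s.io/v1/selfsubjectaccessreviews
	// and the response's status.allowed inspected.
	for _, r := range checksFor("team-a") {
		b, _ := json.Marshal(r)
		fmt.Println(string(b))
	}
}
```

A namespace would qualify only if every check comes back allowed.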

I could also just be severely over-simplifying this.

Something along those lines, but I'd like us to run on pure Kube as well, so this would have to work with namespaces, and you can't simply list namespaces the way you can list projects in OpenShift, because of RBAC.

We'd need a SAR check for the whole Role, so the next thing would be finding out whether an API supports that, or whether we'd just iterate over the rules in the Role.
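Iterating over the Role's rules could mean flattening each rule into individual (group, resource, verb) checks and running a SAR for every one. A minimal sketch, using a simplified stand-in for the RBAC PolicyRule type (the rules in `main` only approximate the shipped single-namespace role):

```go
package main

import "fmt"

// PolicyRule is a minimal stand-in for the RBAC PolicyRule type:
// each rule grants a set of verbs over resources in API groups.
type PolicyRule struct {
	APIGroups []string
	Resources []string
	Verbs     []string
}

// Check is one (group, resource, verb) access check to run via SAR.
type Check struct{ Group, Resource, Verb string }

// expandRules flattens a Role's rules into individual checks;
// the SA must pass every one for the namespace to qualify.
func expandRules(rules []PolicyRule) []Check {
	var checks []Check
	for _, rule := range rules {
		for _, g := range rule.APIGroups {
			for _, r := range rule.Resources {
				for _, v := range rule.Verbs {
					checks = append(checks, Check{g, r, v})
				}
			}
		}
	}
	return checks
}

func main() {
	// Simplified approximation of the single-namespace role.
	rules := []PolicyRule{
		{APIGroups: []string{"route.openshift.io"}, Resources: []string{"routes"}, Verbs: []string{"get", "list", "watch", "update"}},
		{APIGroups: []string{""}, Resources: []string{"secrets"}, Verbs: []string{"get", "create", "update"}},
	}
	fmt.Println(len(expandRules(rules)), "checks to run per namespace")
	// → 7 checks to run per namespace
}
```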

I haven't really looked at it yet. I don't suppose it's super complicated.

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

/remove-lifecycle stale

FYI, I've wired multiple namespaces for v2, just not the autodetection yet.