jp-gouin / helm-openldap

Helm chart of OpenLDAP in high availability with multi-master replication, PhpLdapAdmin and Ltb-Passwd


Multiple k8s cluster support

zerowebcorp opened this issue

Hello,
Great work on building this chart. It helped me do a quick POC of OpenLDAP. We have a requirement to build and deploy OpenLDAP on two Kubernetes clusters, each in a different region: two pods in master-to-master replication in region 1 for HA, the same in region 2, and master-to-master replication between the pods of the two regions as well.

Is it possible to achieve this with this chart?

Hi,
Thanks 😊
Out of the box, no, the chart can't deploy such a configuration.
I think you could achieve it with a few tweaks: add a bring-your-own-server-URI option in values.yaml and use those URIs in configmap-replication-acls.yaml, roughly as sketched below.
Feel free to submit a PR if you'd like to give it a try.
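A minimal sketch of what that values.yaml option could look like (the `replication.externalUris` key is hypothetical, not part of the chart today):

```yaml
replication:
  enabled: true
  # Hypothetical: URIs of the masters in the other cluster, to be appended
  # to the olcServerID / olcSyncRepl entries rendered by
  # configmap-replication-acls.yaml
  externalUris:
    - "ldap://10.1.0.10:1389"
    - "ldap://10.1.0.11:1389"
```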

My initial thought was to keep the chart as is and mount a custom LDIF file inside the pod with the configuration specific to the replication between the two Kubernetes clusters. I thought of exposing the service as a LoadBalancer type so that it gets a private IP address from the virtual network (I use Azure), and then using that IP when configuring replication so that the two OpenLDAP installations can talk to each other, e.g. via a Service like the one below. Theoretically it could work: even though Kubernetes will forward each request to one of the running pods in the other cluster, the pods inside that cluster replicate among themselves as well. I am not quite sure how difficult, or even feasible, it is in practice.
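For reference, exposing the service with a private IP on AKS would look roughly like this, using the standard Azure internal load balancer annotation (names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: openldap-replication
  annotations:
    # Ask AKS for a private IP from the virtual network instead of a public one
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: openldap
  ports:
    - name: ldap
      port: 1389
      targetPort: 1389
```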

My challenge is understanding how to set up replication. I did review configmap-replication-acls.yaml and also the actual ConfigMap Helm produced after installing the release. I am going through the OpenLDAP documentation to understand what each keyword means (roughly annotated below).
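For reference, an annotated olcSyncRepl entry, which is the heart of that ConfigMap, looks roughly like this (values are placeholders):

```
# rid                     - replica ID, unique per consumer on this server
# provider                - the master this server pulls changes from
# binddn / credentials    - identity used to bind for replication
# searchbase              - the subtree being replicated
# type=refreshAndPersist  - keep the connection open and stream changes
# retry="60 +"            - on failure, retry every 60 seconds indefinitely
olcSyncRepl: rid=001
  provider=ldap://openldap-1.openldap-headless.ldap.svc.cluster.local:1389
  binddn="cn=admin,dc=example,dc=org"
  credentials=replication-password
  searchbase="dc=example,dc=org"
  type=refreshAndPersist
  retry="60 +"
```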

Any thoughts on the above approach of injecting a custom LDIF, or does it need to be part of the same ConfigMap you have in the chart?

That would work, but you'll have to provide the complete replication configuration through the customAcls variable (rough sketch below), because you are modifying the cn=config object.

Although, due to #115, I wouldn't recommend using this in production, because you won't be able to use the chart to update the configuration afterwards.
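Roughly, that means putting the whole cn=config change set into customAcls in values.yaml, something like this (an abbreviated sketch, values are placeholders; check the chart's default ACL file for the exact expected shape):

```yaml
customAcls: |-
  dn: olcDatabase={2}mdb,cn=config
  changetype: modify
  replace: olcAccess
  olcAccess: {0}to * by dn.exact="cn=admin,dc=example,dc=org" write by * read
  -
  replace: olcSyncRepl
  olcSyncRepl: rid=001 provider=ldap://10.1.0.10:1389 binddn="cn=admin,dc=example,dc=org" credentials=replication-password searchbase="dc=example,dc=org" type=refreshAndPersist retry="60 +"
```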

Interesting, so the customAcls won't be applied (at the moment) once the Bitnami image creates the initial database and restarts for the first time.
Since it is a one-time apply, I was wondering if we can mount the config and then manually exec into the Bitnami OpenLDAP pod and apply it through the command line (or use a custom sidecar), along the lines of the commands below.
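Something like this, assuming the LDIF is mounted at /custom/replication.ldif and the cn=config admin is enabled in the Bitnami image (the path and the LDAP_CONFIG_ADMIN_PASSWORD variable are assumptions):

```sh
# Apply the mounted replication LDIF against cn=config inside the first pod
kubectl exec -it openldap-0 -- sh -c \
  'ldapmodify -H ldap://localhost:1389 \
     -D "cn=admin,cn=config" -w "$LDAP_CONFIG_ADMIN_PASSWORD" \
     -f /custom/replication.ldif'
```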

The other option is what you have suggested: modify the chart and add a flag for remote sync in configmap-replication-acls.yaml so that it is applied up front when the database is built, e.g. the conditional sketched below.
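As a sketch, the template could gate the extra remote entries behind a new flag (`replication.externalEnabled` and `replication.externalUris` are hypothetical values):

```
{{- if .Values.replication.externalEnabled }}
{{- range $i, $uri := .Values.replication.externalUris }}
# additional remote master {{ $uri }} would get its olcServerID and
# olcSyncRepl entries rendered here
{{- end }}
{{- end }}
```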

The serverID is something I have to tackle when running multiple installations. For example, in one AKS cluster it would be 1 and 2 for the two pods.


This would be the same in the other Kubernetes installation as well. I am still reading through the OpenLDAP documentation, but my assumption is that it needs to be unique.

```
olcServerID: {{ $index1 }} ldap://{{ $name }}-{{ $index0 }}.{{ $name }}-headless.{{ $namespace }}.svc.{{ $cluster }}:1389
```
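The assumption holds: each master in an N-way multi-master mesh needs its own olcServerID. So the four pods across the two clusters would end up with something like this, where the region-2 addresses are whatever the exposed load-balancer IPs turn out to be (placeholders here):

```
# Region 1 (IDs 1-2, in-cluster DNS names)
olcServerID: 1 ldap://openldap-0.openldap-headless.ldap.svc.cluster.local:1389
olcServerID: 2 ldap://openldap-1.openldap-headless.ldap.svc.cluster.local:1389
# Region 2 (IDs 3-4, reached through the private load balancers)
olcServerID: 3 ldap://10.1.0.10:1389
olcServerID: 4 ldap://10.1.0.11:1389
```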

I don't know if anyone in this community has attempted a multi-cluster installation. I am going to download the Helm chart and make changes to see if it works. Any direction on this is appreciated.


Is there a reason why a new ACL needs to be created if we are using the same admin password for both Kubernetes installations?